Sound Barrier

The first Broadway show I ever saw was “Hello, Dolly!,” which had recently been recast with Pearl Bailey and Cab Calloway in the lead roles.1 It’s hard to recapture how thrilling it was to experience a fully staged musical for the first time. The sets! The lighting! The costumes! Singing and dancing! A live orchestra! I was totally transported.

Well, almost totally transported. There was something weird about Pearl Bailey’s voice. At first I thought that she was lip-syncing to a recording. She was, in fact, performing live, but she had apparently been fitted out with a primitive wireless microphone hidden in her costume, and her voice seemed to be coming from everywhere but her mouth. Her audio was piped through a set of loudspeakers flanking the proscenium, producing an effect similar to hearing the principal give a speech at a school assembly.

What I had witnessed was the early encroachment of amplification into stage performances. Nowadays, every performer in a Broadway show is miked, and there’s little effort made to hide the headsets and transmitters. Shows have sound designers, and audio technology has improved, so there’s no longer any “Hello, Dolly!”-style artificiality. In fact, I’m sure that most people don’t notice the amplification, or if they do, don’t mind it. Perhaps they even appreciate it, since it makes every performance sound as smooth and balanced as the cast recording they can buy in the lobby.

But something has been lost in the process — the raw immediacy of the live performance. In what sense is the performance “live” if what the audience hears is not the sound coming directly from an actor’s mouth, but instead an artificially mediated facsimile? The actor’s voice is being converted to electric signals, amplified, processed electronically in any number of ways (volume, tone, reverb), mixed with the output of the orchestra and the other actors’ voices, and pumped through speakers that are nowhere near where the actor is. (In fact, one of the sound designer’s jobs is to make the voice appear to be coming directly from the actor’s mouth, even though it isn’t.) Add to that the fact that every note is precisely timed — the orchestra is almost always playing along with a metronome-like recording called a click track — and what you have is a performance that might as well have been prerecorded.

In fact, in some cases it is prerecorded. It took me a while to realize it, but the musicals I’ve seen on cruise ships are performed almost entirely to digital tracks. The only voices that are live are those of the male and female leads. None of the other performers are miked, and the voices we’re hearing are not necessarily those of the people we’re seeing onstage. The fact that none of this was immediately apparent — that the experience of hearing canned performances is pretty much indistinguishable from hearing live ones — is less a compliment to the quality of the recordings than it is a knock on what we’re now willing to settle for when we go to the theater.

I must admit that the introduction of amplification has brought some advantages. For one thing, it has greatly expanded the range of styles that are considered suitable for a musical. Songs no longer have to be belted, Ethel Merman-style, or loaded with vibrato; they can be crooned, or whispered, or rapped. The type of rock-and-roll musical that Andrew Lloyd Webber pioneered with “Jesus Christ Superstar” wouldn’t have been possible without amplification of both voices and instruments. I’d hate to have to sacrifice “Hamilton” on the grounds that it’s technologically impure.

I’ll even grant that amplification has made the experience of the audience more equitable. Very few theaters have perfect acoustics; there are always going to be some seats from which actors’ voices can be heard better than from others. Modern audio technology allows everyone in the theater to hear pretty much the same thing. Not to mention that — as I can tell you from personal experience — having a mic makes the actor’s job much easier. I don’t blame Pearl Bailey for wanting to have that advantage.

Still, I can’t help but lament what’s been lost. There’s something qualitatively different about hearing sound waves that come directly from the vibrations of human vocal cords or a musical instrument. It’s an experience that we rarely have anymore outside of an opera house or a symphony hall.

For the past ten years, I’ve been working to preserve that experience by holding acoustic concerts in my living room. (I’ve had to suspend them since the COVID-19 outbreak, but hope I can bring them back when it’s over.) The concerts are obviously on a much smaller scale than Broadway musicals — the performers are folk musicians, jazz ensembles, ragtime pianists, classical duos and trios, a cappella groups, and the like — but they all have one thing in common: no amplification and no electric instruments. It’s an opportunity to hear music the way it’s meant to be heard.

My wife tells me that I’m the only one who cares about the no-amplification rule — that people come to these concerts because they like the musicians I book, not because they get to hear them unmiked. That’s probably true. But if nothing else, I’m exposing people to the now-revolutionary idea that music doesn’t have to be linked to audio engineering. Sometimes it’s just enough to put musicians and listeners in a room together, and let the sound waves propagate.



Blind Spots

My fifth-grade teacher was teaching us about the Panama Canal, and how it connects the world’s two great oceans with an elaborate series of locks. He described how a ship would enter the first lock, the gates would close behind it, the lock would fill up with millions of gallons of water to lift the ship up to a higher level, and another set of gates in front of the ship would open to let the ship pass to the next lock. It was all very impressive, but there was something missing from his explanation.

“Why are the locks there?” I asked. “Why are they needed?”

The teacher seemed never to have thought about this before. He paused for a moment and said, “Probably because the water level in the Pacific Ocean is higher than the Atlantic Ocean.” Satisfied with himself, he went on with the lesson.

His answer was patently absurd. Anyone who looked at a map could see that the two supposedly separate oceans were in fact different portions of a single body of water, and therefore couldn’t have different levels. But by that time, I’d learned from hard experience that it never pays to correct the teacher.2

For years afterward, I recalled that exchange with a bit of resentment and a big dollop of smugness. Why couldn’t he just have confessed that he didn’t know, instead of making up an answer on the spot? When I eventually became a teacher, I was always ready to admit when I was stumped by a student’s question — and in fact, I took it as an opportunity to model problem-solving behavior. “I don’t know,” I’d say to the student who asked the question. “Why don’t we find out?” And then everyone could watch my screen as I went to Google to investigate.

(Sometimes it was useful to say “I don’t know” even if I did know. If I was demonstrating how to use a piece of software — Photoshop, for example — and a student would ask a “what if?” question such as “What happens if I use the eraser on a type layer?” I’d answer, “I don’t know; let’s all try it and see!” hoping that the students would realize that they could easily answer such questions on their own.)

So it was easy to look back at my fifth-grade teacher and feel superior. But the longer I went on teaching, the more I realized that I had frequently been guilty of saying things that were absolutely wrong. I had told students that skin tone contained more blue than green (in reality, the opposite is true); that Tim Berners-Lee, at the time he invented the World Wide Web, was a physicist (he had a BA in physics, but that’s about it); that sans-serif characters weren’t used in ancient Rome (they were); that there’s no such thing as half a pixel (there sort of is), and many other things that I can’t remember now because, well, they were wrong.

Obviously, when I taught these “facts” to classes, I thought at the time that they were correct. Why I thought they were correct, I can’t say. But no matter where my supposed knowledge came from, I had clearly fallen victim to that age-old philosophical conundrum, “You don’t know what you don’t know.” (Or, in the famous words of Donald Rumsfeld, “There are unknown unknowns.”) I’m good at problem-solving, but I’ve never figured a way out of that one.

Back when I worked in educational publishing — before the days of computerized layout and spell-checking — I was preparing to send a workbook to be printed. This was the final step in a long process for which I was responsible, which included multiple passes of rewriting and editing, getting type from a compositor, proofreading the typeset copy and making corrections, and finally having a designer slice up the type and adhere it to layout boards, resulting in “mechanicals” that the printer would photograph to make plates.

As I was about to package up the mechanicals, a colleague of mine — an experienced editor who could spot a mistake across a room — happened to be walking by. She scowled at me and said, “You have a spelling error.”

“What?! Where?” I said. It was unlikely that a typo would have made it this far through the process, and at this stage it would be an expensive thing to fix.

“ ‘Ophthalmologist’ is misspelled,” she said.

“No it isn’t,” I said. “I know people think it’s ‘opthamologist,’ but I made sure the ‘l’ is in there — ‘opthalmologist.’ ”

“Yes, but there’s an ‘h’ missing,” she said. “It’s not ‘opthalmologist,’ it’s ‘ophthalmologist.’ ”

I turned pale. “Are you sure?” I asked, knowing as soon as it came out of my mouth that it was a stupid question.

“If you didn’t know how to spell it, why didn’t you look it up?” she said, glaring.

“But I did know how to spell it,” I said — meaning, of course, that I thought I knew how to spell it. How was I supposed to know that I didn’t? Since that time, I’ve been aware that each of us is a storehouse of snafus that are waiting to happen. We can hear the ticking of the time bomb, but we have no way to know where the bomb is and when it’s due to go off.



Mouse

I’m not very good at catching things. My hand-eye coordination isn’t great in general, but catching is a special case — because if a solid object is hurtling toward the upper half of one’s body, the natural impulse is to get out of the way. This is an impulse that we are supposed to learn to overcome in childhood, but I never did. The best I was able to do was engage in a halfhearted performance of looking like I was trying to catch the thing while still getting out of the way.

Despite this handicap, I still manage to live among civilized people, because catching things isn’t an essential part of everyday life. But there are far riskier things that we must learn to do because one can’t lead a full life without them. Probably the best example is driving. I can get in a car and drive at high speeds on a freeway, closely surrounded by other cars, without giving it a second thought. On those rare occasions where I do give it a second thought — when I’m behind the wheel on an interstate, casually listening to a podcast while tearing along at 80 miles per hour, and suddenly think, “Ohmigod, what am I doing? Am I crazy?” — there’s no immediate equivalent to getting out of the way.

Think of how we routinely invite foods and medicines directly into our bodies without having any idea where they came from, or trust our safety to the strangers who repair our cars and build our bridges, or get into a shower with a bar of slippery soap. We don’t even think of these things as being dangerous, because to do so would essentially stop us from living our lives. There are, in fact, some people who refuse to do commonplace things on account of the real risks involved, but we tend to consider them deluded instead of recognizing that they’re being rational, and we’re the ones who are deluded.

It’s become commonly accepted wisdom that the key to success in life is facing and overcoming our fears. This doctrine overlooks the fact that fear exists for a reason: to keep us safe. The important thing is to differentiate between the fears that are irrational — fear of spiders, for example — and fears that are well-founded, such as fear of investing our life savings in a questionable business venture.

For some people, overcoming fear is an end in itself. They’ll decide to try skydiving, not because they think plunging thousands of feet toward earth will be a pleasant experience, but because they want to prove to themselves that they can do it. I can’t call such people unreasonable — statistically, deaths from skydiving are rare — but I’d think they’d want to reserve those fear-overcoming impulses for times where they can accomplish something more practical.

I once went ziplining over a Santa Cruz redwood grove, mostly because I was with a group of people who all wanted to do it. The prospect of doing it was frightening, but I realized that my fear was irrational, since people go ziplining all of the time without plummeting to their deaths. So I swooped over the redwoods, and survived. I guess that counts as proving to myself that I could do it, but I’m not sure how that’s valuable. I certainly wouldn’t do it again, because the one thing I remember about the experience is that it was scary. And unlike people who flock to horror movies and roller coasters, I don’t find being terrified to be enjoyable.

There have been times in my life when I’ve deliberately faced my fears, but those were times when I had something to gain by doing so. My first job after college was with an educational publishing company, where I rose quickly from production assistant to the person essentially in charge of audiovisual production. It was secure employment, doing something that I was good at, and I got regular paychecks and benefits. But after six years or so, I began to feel that I’d learned everything I could learn there, and so I decided to quit my job and become a freelancer. It was probably the most frightening decision I’ve ever made, since I had no safety net other than a small savings account, but it was something I knew I needed to do. The fear was intense but irrelevant.

Sometimes, there’s no choice but to face your fear. I vividly remember being stranded when I was twelve years old. I was on a teen study tour of France — a prize I’d won in a French contest — where I was known as Mouse, since I was the youngest one in the group. After five weeks in France, we spent a week in England, living in a dormitory in Reading and commuting by train to London. The first day, at the Reading train station, I somehow got separated from the others, who (unknowingly) boarded the train to London without me. So what do you do when you’re a mouse who’s alone in a foreign country? What I did was take the next train to London, spend a day sightseeing on my own — I remember going to Madame Tussaud’s and later shopping for souvenirs — and take the train back to Reading. I then hitchhiked from the train station back to our dorm. I’m not sure anyone ever realized I was gone.

Was it frightening? Definitely — especially the hitchhiking part. Would I do it again? Who knows, since I’m quite unlikely ever to be in the same predicament. But it’s good to know that when circumstances require, I can get past my fear and do what needs to be done.


Skipping and Jumping

As with pretty much everything else I do, I’m self-taught in computer coding, so figuring things out sometimes takes a while. The animated illustration in my post “Crossing a Line” looks simple, but writing the JavaScript that makes it work took three days. (Part of the difficulty was that it has randomness built into it, so that in the diagram — as in life — the action never repeats.)
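
(For the curious, here is a minimal sketch of what “randomness built in” means in practice. This is not the actual script behind “Crossing a Line”; it is just the general technique, and it assumes a hypothetical <canvas> element with the id “diagram” on the page. Each animation frame nudges a dot’s velocity by a small random amount, so the path it traces never repeats.)

    // A sketch of the general technique, not the code from "Crossing a Line".
    // Assumes the page contains <canvas id="diagram" width="400" height="200">.
    const canvas = document.getElementById("diagram");
    const ctx = canvas.getContext("2d");

    let x = canvas.width / 2;   // current position
    let y = canvas.height / 2;
    let vx = 0;                 // current velocity
    let vy = 0;

    function step() {
      // The random nudge each frame is what keeps the motion from ever repeating.
      vx += (Math.random() - 0.5) * 0.4;
      vy += (Math.random() - 0.5) * 0.4;

      // Damp the velocity so the dot drifts instead of flying off.
      vx *= 0.98;
      vy *= 0.98;

      x += vx;
      y += vy;

      // Bounce off the edges of the canvas.
      if (x < 0 || x > canvas.width) vx = -vx;
      if (y < 0 || y > canvas.height) vy = -vy;

      // Redraw the dot at its new position.
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.beginPath();
      ctx.arc(x, y, 5, 0, Math.PI * 2);
      ctx.fill();

      requestAnimationFrame(step);
    }

    requestAnimationFrame(step);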

For the most part, I enjoy the challenge, but there are times when it’s immensely frustrating. There will be a block of code that’s relatively simple and absolutely ought to work, but doesn’t. I’ll stare at every character and say, “Yup, that’s right,” and retrace the logic in my head and say, “Uh-huh, that makes sense,” and yet the code just stares back at me. I could blame this on my own ineptitude, except that professional programmers tell me that they encounter the same thing.

What amplifies the frustration is that I know that eventually I will solve the problem, since I always have in the past. The answer is right in front of me; I just haven’t discovered it yet. At times like that, I often wish that I could just skip the useless hours and jump ahead to the time when the problem has been solved, so I can get back to doing productive work.

The idea of jumping ahead in time has always had special interest for me, because it feels almost tangible: If I know that a particular moment in the future is going to happen, why can’t I just go there? We’re all accustomed to cuts in movies, where the time and place change in an instant, so I imagine it shouldn’t be too jarring for it to happen in real life. I don’t want to change the future; I just want to do some judicious editing.

But an interesting philosophical problem emerges when I ask myself what it would actually mean to jump ahead in time. The jump isn’t something that I could perceive while it’s happening, since it would be instantaneous. I’d only be aware of it once it’s happened. Therefore, “skipping ahead” is something that can be experienced only in memory: I’d remember sitting at my computer staring at a block of code that’s not working, and then seeing a moment later that the code has been rewritten (most likely in a stupidly obvious way) and is now working smoothly.

In that case, it seems like “skipping ahead” is an illusion that could actually be accomplished retroactively. If we imagine that there were some way to surgically operate on my memory so that the problem-solving hours could be removed, my post-operative experience would be indistinguishable from one in which time itself had somehow been edited. From my perspective, it would appear that I’d actually jumped a few hours into the future.

Of course, that raises the question of which person is me — the one who does all the frustrating work and then has his memory operated on, or the one who experiences a painless jump? Ideally, I’d want to identify with the latter me, the one who doesn’t even recognize that the former me (or at least a few difficult hours in the life of the former me) ever existed.

But there’s no reason why I shouldn’t equally identify with the former me, the one who actually did the work. That me has already spent the time fretting and experimenting and eventually solving the problem, so how would it benefit him to have that time surgically removed from his memory? He’s already at the point where his memory would resume, so why not just get on with further coding?

So it turns out that the operating-on-the-brain solution really is no different from the idea of physically jumping ahead in time (whatever that might mean). In either case, someone is going to go through the mental agitation that leads up to solving the problem, and either way, that person has to be me. Consider that fantasy dashed, then.

As a postscript, have I mentioned that I was a philosophy major in college? Engaging in philosophy requires conducting this sort of thought experiment all the time — going around in circles as you try to come up with an answer to a philosophical question. If I’m going to have this frustrating experience of working on a problem, I’d rather do it with computer code, because at least I have something concrete to show for it at the end.


Learning Backwards

When I was in third grade, my class was introduced to a strategy called SQR3, which was supposed to improve our reading comprehension. It required that we engage in three steps whenever we encountered any new piece of reading material:

  1. Skim. Briefly look through the text to get a sense of what it’s about. Get additional clues from the book jacket or the table of contents.
  2. Question. Come up with some questions that you think the text will be able to answer.
  3. Read. Read the text with the aim of finding answers to your questions.

I found this prescription galling. First, nobody has any right to tell me how to think; what I do with my mind when I read is my own business. And second, why waste time with the first two steps when I can just read the damn book? I silently rebelled by refusing to engage in SQR whenever I wasn’t explicitly instructed to.

Around the same time, my father was encouraging me to go through the newspaper every day, from the front page to the last. “You don’t have to read the articles,” he said. “Just look at the headlines. Then if you come across an article that interests you, you can read it.” That seemed pointless. I already knew what I wanted to read in the paper: the comics and Ann Landers’s advice column. (I don’t know why a third grader would be so attached to reading Ann Landers, but I was. Perhaps it’s because she appeared so much more sensible than the other adults in my life.) I dismissed my father’s recommendation as just another thing that your parents tell you to do because it’s good for you.

The funny thing is that many years later, I realized that I was doing just those things that I’d rejected as a child: I was skimming and anticipating before reading a book, and I was diligently looking through the news headlines every day. And it wasn’t because I’d been taught to do those things when I was young; I’d long forgotten about SQR. It was because they were natural outgrowths of curiosity: If you’re interested in the subject of a book, you’ll naturally want to get some context before diving in. If you’re interested in the news, you’ll naturally want to glance at the headlines every day.

In other words, everything I was taught was backwards. Learning rote behaviors doesn’t create interest; instead, having interest leads to those behaviors. Once I realized that, lots of other inexplicable things made sense.

Take organized religion, for example. From the time I began my Jewish education as a young child, I was mystified by what was expected of me. Instead of being taught facts as I was in public school, I was being taught a set of unprovable beliefs in Hebrew school. I had friends who went to Catholic school and were being taught beliefs that were entirely different, yet they were supposed to accept them in the same unquestioning way that I was supposed to accept Jewish ideology. Why was that considered normal? Why would anyone subscribe to a religion that arbitrarily told them what to believe, instead of allowing themselves the freedom to believe whatever they wanted to?

It wasn’t until I was in my 20s, and was drawn into a Quaker community where I felt surprisingly at home, that I realized I had it backwards. People didn’t form religions in order to be told what to believe; instead, they had beliefs, and they found support by gathering with people who had beliefs similar to theirs. That was a perfectly natural and understandable thing. If there is pressure to believe what one’s family and community believe, that’s a corruption of religion, not inherent in the idea of religion itself.

I was reminded of the backwards nature of our attitude toward religion after my younger sister died a few years ago. Although she had been diagnosed with pancreatic cancer a year earlier, her actual passing was difficult to accept, and I remained in a state of shock. My wife asked me whether I wanted to sit shiva, and I said no. Sitting shiva is a Jewish tradition in which, after the death of a family member, the remaining members of the immediate family gather together for a week to support each other in mourning. (“Shiva” is the Hebrew word for seven, referring to the seven days of the observance.) People who are sitting shiva don’t do any work, don’t prepare meals, and don’t leave the house; instead, friends come by with food and condolences.

I objected to sitting shiva because, as has been true since childhood, I resist following Jewish traditions simply because I’m supposed to. “If you don’t want to sit shiva,” Debra asked, “what would help you feel better?”

“I don’t know,” I said. “I don’t want to do anything. I think I just want to stay home, with you. If friends want to come by, that would be nice.” I began to realize what I was saying. “And maybe they can bring food.” We both laughed. Clearly, sitting shiva wasn’t needed because it was a tradition; it was a tradition that came about to fill a need. It was yet another thing I’d learned backwards.

