Historical Record

To introduce the theme of encounters with unfamiliar technology, my earlier post “Hi, Tech!” began with a famous scene from “Nanook of the North” in which the titular Nanook bites down on a phonograph record. I knew that the most appropriate illustration for the post would be a photo of that moment, so I downloaded a copy of the film — it’s in the public domain, so online copies are plentiful — and began to search, one by one, for the frame that would work best as a still image.

Frustratingly, none of the actual frames from the film matched the iconic picture that I had in my mind. There was one frame where the record was positioned just right, but Nanook’s body was twisted in an awkward position that made it difficult to make out what he was doing. There was another where Nanook’s body was positioned perfectly, but the record was reflecting light directly into the camera and therefore looked like a featureless, glowing disc. Furthermore, every frame that I looked at had compositional problems: Nanook’s head was partially cut off, or his fur-lined hood was indistinguishable from his hair, or the white pelts that covered his upper legs blended with the white background, making him appear legless. To my dismay, the perfect moment that existed in my memory didn’t exist in the film.

The image that I ended up posting is a composite of elements from four different frames. It took me hours to put together, which felt ironic because the actual filming of “Nanook of the North” was done on the fly, unrehearsed, as the director Robert Flaherty stood in the cold and wind and pointed his camera. None of the fleeting moments that he captured was ever meant to be examined as closely as I examined those individual frames, cutting them apart pixel by pixel.

As I worked, I couldn’t help but reflect on the miraculous process that allowed these nearly 100-year-old images to land on my computer screen. Out there in the Arctic tundra, sunlight reflected by Nanook and his companions passed through the lens of Flaherty’s hand-cranked camera and chemically altered the light-sensitive coating on a strip of film, creating a negative. Each night, Flaherty developed and printed the negative onsite in a primitive darkroom. (He always made sure to show the Inuit participants what he had shot the day before, thus encouraging their continued collaboration.) Eventually, the negative was carried back to the United States, edited, and brought to a laboratory, where it was used to create prints that went out to cinemas for public exhibition.

Over the years, people struck duplicate negatives from existing prints and made new prints (which, unfortunately, were of successively lesser quality with each new generation). At some point in our own century, somebody digitized one of those prints, converting the patterns of light and dark into a series of ones and zeroes, and uploaded that information to a worldwide network of computers. From there, it was downloaded to the hard drive of an iMac in a house in Oakland. And now those ones and zeroes were displayed by my computer as patterns of light and dark on its screen, essentially duplicating the patterns of reflected sunlight that long ago had landed on the film in Flaherty’s camera.

It wasn’t supposed to be this way. For millions of years, human life was evanescent — a series of infinitely small moments that happened and, just as quickly, vanished. When a person died, their disappearance was complete. Yet, uncannily, Nanook was here on my monitor, grinning at the camera as he watches a record spin on a phonograph and then puts it in his mouth. The pixels I was dragging around in Photoshop were ghosts of things that no longer existed.

I know what became of Nanook (whose real name, deemed by Flaherty to be unpronounceable by the moviegoing public, was Allakariallak). He died, most likely of tuberculosis, two years after the film was released. But since much of my retouching work involved the phonograph record, I found myself wondering about it. It’s not just an abstraction, a symbol of a record. It’s an actual, particular, physical object — something that existed, that had a past and a future.

Someone — we don’t know who, since the label is illegible — had once gone into a studio and sung into a large horn that conducted the sound waves to a needle, which carved a spiral groove into a waxlike coating on a metal disc. A metal “master record” made from that original was pressed into a shellac mixture and trimmed, creating a copy that someone — perhaps Flaherty himself — found and bought in a record store. That record, taken to the Arctic, appeared briefly in front of a camera, leading to its improbable immortality. And then what? It’s unlikely that the record still exists, a century later. It was no doubt listened to many times off-camera, providing entertainment to any number of people. Eventually its groove wore out, or it shattered, or landed in storage in somebody’s basement and was eventually disposed of. Its atoms, no doubt, have long been scattered through the universe. And yet, there it is, reflecting light, holding an unknown human voice, spinning on my computer screen.

When I drive on the lower deck of the Bay Bridge, I look at the countless rivets holding it together, and realize that each of those rivets was put there by a person with a rivet gun. For each, I wonder, who was the person in the early 1930s who hammered in that particular rivet? What did he have for breakfast that day? Did he have a family to go home to? Did he like his work, or would he rather have been an engineer, or a sailor? How long did he live, and how did he die? How can I express my gratitude to him for helping to build the structure that is supporting me right now, keeping me from falling into the bay? Every manufactured object we see embodies a human life (or many lives). The “Nanook of the North” phonograph record, which began with a voice in a studio and found its way into the mouth of an Inuit hunter, exists now only as an illusion, a collection of illuminated dots, but it still carries the spirits of the people who created it and interacted with it. How much more so for the things around us that we can still reach out and touch!


Fair Minded

One of the highlights of my childhood was my visit to the 1964–65 New York World’s Fair. There was the Sinclair exhibit with its life-size dinosaurs, and Ford’s Magic Skyway, where you could watch the entire history of the human race go by from the comfort of a self-driving Mustang convertible. There was the Illinois pavilion, where Disney’s audio-animatronic Abraham Lincoln miraculously stood up and gave a speech, and the IBM pavilion, where the audience was hydraulically lifted into a giant egg and dazzled by an immersive multimedia show. There was DuPont’s “Wonderful World of Chemistry,” in which live actors sang, danced, and interacted with filmed actors projected onto moving screens. And there were technological innovations that I’d never seen before: color TV, “Touch-Tone” phones with buttons instead of dials, and IBM Selectric typewriters, where the type element moved along a track while the carriage stood still.

As I got older, my wondrous memories of that fair led me to be interested in another exposition that once had been held on the same site: the 1939–40 World’s Fair. Unlike the later fair, which was a hodgepodge of futuristic architectural styles, the 1939 fair was a visual delight, featuring Art Deco graphics and clean Modernist architecture. It had an overarching theme — “The World of Tomorrow” — intended to lift the spirits of a population that had weathered the Great Depression and was looking ahead to a better and more prosperous world. Its most famous exhibit was General Motors’ Futurama, which displayed an imagined model city of 1960, with gleaming suburbs connected by a network of fast, efficient highways (a new idea at the time). I’ve watched films taken at the fair and seen exhibits of its relics, but I’ve always wished I could have experienced it in person.

One reason for my emotional attachment to that fair was that my mother had been there. I remembered her stories about the majestic size of the fair’s centerpiece, the Trylon and Perisphere; about seeing television for the first time; about being introduced to nylon stockings; and about trying out a new type of pen, the ballpoint, which didn’t have to be dipped in ink. Compared to those things, push-button phones and improved typewriters felt trivial.

A few years before her death, I told my mother about how I’d been influenced by her descriptions of the 1939 fair when I was growing up. I expected her to lapse into warm reminiscences, but instead she looked at me like I was crazy.

“What are you talking about?” she said. “I never went to that fair. I was five years old! Even if I had gone, I wouldn’t have paid attention to things like pens and nylon stockings.”

I quickly did the math, and was stunned to realize that she was indeed five years old in 1939. Evidently, she had never told the stories that I so clearly remembered her telling. I still have no idea where those false memories came from.

That conversation left me shaken. What other memories, what other explanatory stories, were pure inventions? So much of my sense of who I am comes from remembered events and conversations. How can I be sure that any of them are real?

The answer is that I can’t — especially now that all of the members of my immediate family are gone. Other than me, there are no surviving witnesses to my childhood. There is no objective reality about my formative years; there is only what’s in my head. The stories that form the basis for much of what I’ve written in my blog posts may be entirely fictional.

My only consolation is that if they are fiction, they’re pretty good fiction. I don’t think I have the skill to have made them up consciously. Perhaps I’m not a product of my past; I’m just a product of what my current brain thinks was my past. If so, that doesn’t stop me from drawing lessons from it.

Still, who would have thought that The World of Yesterday was as much a product of imagination as The World of Tomorrow?


Hi, Tech!

There’s a famous scene in Robert Flaherty’s silent documentary “Nanook of the North” in which Nanook, an Inuit hunter whose daily life the film depicts, visits a distant trading post. There, he watches in wonderment as a trader plays a record on a phonograph and explains (as the intertitle tells us) “how the white man ‘cans’ his voice.” The trader lifts the record off the turntable and hands it to Nanook, who examines it closely and then bites into it, perhaps to find out how the voice is stored inside.

The scene, as we now know, was staged. By 1922, the year the film was made, the Inuit were well acquainted with technology, and actually hunted with rifles rather than the old-fashioned harpoons they’re shown to use in the film. The real-life Nanook knew perfectly well what a phonograph record was. Presumably, he and Flaherty thought that American audiences would find the record-biting scene amusing. In spirit, it’s really no different from the scene in “Star Trek IV: The Voyage Home” in which the crew of the Enterprise time-travels back to 1986, and chief engineer Scotty, encountering an early Macintosh computer, speaks into the mouse to give the computer instructions.

We instinctively find it funny when someone is mystified by technology that we take for granted. Debra and I laugh when our 20-something goddaughters don’t know where to put the stamp on an envelope, or have trouble reading an analog clock. The thing is, for our goddaughters, it’s not funny at all. These are actual things that they don’t know, and they’re encountering situations in which they’re expected to know them. Our laughter really isn’t helpful. I know this because I’ve often been in the same situation that these young women are in.

There was the time when an outdoor event was going on at Chabot College, where I was a faculty member. A colleague of mine who had been charged with photographing the event needed to step away for a few minutes, so she handed me her camera. “Would you mind taking pictures for a while?” she said.

The camera she handed me was a fancy SLR, with a long lens and numerous buttons and knobs. “I don’t know how to work this,” I told her.

“What do you mean?” She thought I was putting her on. “You teach Photoshop, don’t you?”

“Yes, but that doesn’t mean I know how to take photos,” I said. I’m actually quite camera-phobic. If I had to take pictures, I always made do with an inexpensive point-and-shoot camera, and polished the photos in Photoshop. If I needed really high-quality photos for a client job, I hired a photographer.

“Never mind,” she said, shaking her head, and took the camera back. I don’t think she ever believed that I didn’t know how to use it; she thought I was just being an asshole.

There was also the time when, as an underemployed freelancer, I swallowed my pride and called a temp agency to see whether they could get me some work. The man I spoke to asked me the standard questions (what my availability was, how fast I could type, etc.), and then asked whether I had any special skills that might be of use in an office.

“I can do page layout,” I said.

“Great,” he said. “What do you use? Quark? Pagemaker?”

“WordPerfect,” I said.

“WordPerfect?” he said. “That’s a word processor. I thought you said you did page layout.”

“I do,” I said. “I do page layout in WordPerfect.”

I was an early adopter of WordPerfect, became one of their beta testers, and eventually handled WordPerfect support on CompuServe. I was a WordPerfect expert — I could make it do anything, including page layout. I had no reason to spend hundreds of dollars on professional page-layout software when, as a beta tester, I got WordPerfect for free.

“If my pages look the same, and my clients can’t tell the difference by looking at them,” I asked the man, “why does it matter what software I use?”

“It matters,” he said, “because nobody is going to hire you to do page layout in WordPerfect!”

Like Nanook and Scotty, I’m not stupid. Given the opportunity (and the money to buy the expensive hardware or software), I could figure out how to use a camera or a page-layout program. It’s just that the need had never arisen before.

That’s why I tend to be sympathetic when people don’t know things that I expect them to know. Like the time when, on the first day of class, I was teaching my PC-using students to use the Macs that were in our classroom. I showed them how to use the buttonless mouse, where the Dock is, and what the Apple and application menus are for. I showed them the proper way to eject a drive and how to translate PC helper keys (control, alt) to Mac helper keys (command, option). About half an hour in, one shy student raised her hand.

“Hi!” I said, “You have a question?”

“Yes,” she said. “How do you turn it on?”


On the Face of It

Jay, me, Krishna, and our respective beards at college graduation

When I reached adolescence and began needing to shave, my father gave me his old Remington electric shaver. I never liked using it. I didn’t like that I had to depend on a machine every morning, I didn’t like the noise it made, and I didn’t like the slightly sandpapery way my face felt after I used it. Eventually, I asked my father to show me how to shave with a razor.

“Why?” he asked. “It’s so much easier with the electric one.” That seemed an odd thing for him to say, since he shaved with a razor every morning. I think he just didn’t want the responsibility of teaching me, because he was known to cut his face occasionally. But he gave in and showed me how to use a razor and shaving cream. From then on I shaved the old-fashioned way, and, not surprisingly, cut my face occasionally.

It never occurred to me to grow a beard until I was in college, when I met my friend — later to be my roommate — Krishna. He had a beard, and told me that he had grown it as soon as he was physically able to, because he had a pudgy baby face and wanted to look more mature. “Besides,” he said, “I don’t want to live in a culture that requires you to put a blade to your throat every morning.” (Krishna was the kind of guy who could make pseudo-profound statements like that and sound cool doing it.) Still, I resisted the temptation to grow one of my own. Where I was raised, in the conservative suburbs of Long Island, people didn’t have beards. Besides, I was performing regularly with the mime company I’d founded, and I’d never known of a mime who had facial hair.

The barrier finally broke the summer after my junior year. My girlfriend had left the country for the summer, I was living alone on campus, and I was dramatically in mourning. I moped around, wore dark glasses, and stopped shaving. When I finally re-entered the world and took off the dark glasses, I discovered that I had a beard, and it actually looked pretty good. I also found that I had no desire to resume putting a blade to my throat every morning.

When I went home to visit my parents, they were not impressed. “You’re going to shave that off, right?” my mother said. “You don’t want to look like that when they take your graduation picture. You’re going to have that picture for the rest of your life.”

“You know who you look like?” scowled one of my parents’ friends at the synagogue. “You look like Jesus Christ!” Clearly he had never studied art history, because my beard looked nothing like Jesus’s. Not to mention that my hair was shorter.

I did, in fact, keep the beard, and despite my mother’s warning, it appears in my graduation photo. I went clean-shaven a few years later because of some acting roles, and by chance I met Debra, who was to become my wife, during that beardless period. When she went away on a planned trip to China, I took the opportunity to grow my beard back. I promised her that if she didn’t like how I looked when she returned, I’d shave it off again. Fortunately, she did, and I didn’t.

Over the succeeding years, as I gradually lost the hair on top of my head, I was happy still to have hair at the bottom of it. My beard turned fully gray just about the time I began my teaching career, transforming me into the perfect model of a college professor.

A musician I know, who sports a similarly gray beard and a shaved head, once suggested that I start shaving my own head. “You’d rock that look,” he said. At the start of the COVID-19 pandemic, when it became clear that haircuts would not be available for a while, I took him up on the suggestion. Although I had long ago abandoned putting a blade to my throat, I was now regularly putting a blade to my scalp, and not surprisingly, cutting myself in the process.

“Why don’t you try an electric shaver?” asked our goddaughter Shaelyn.

“No way,” I said, and told her about my experience with my father’s Remington. “It’s noisy, and it just wouldn’t shave close enough.”

“You know,” she said diplomatically, “it’s possible that shaving technology has improved in the past 50 years.”

She had a point. (I hate when that happens.) I did some research, and found something called the Skull Shaver Pitbull, which has four pivoting rotary blades and is expressly designed for shaving heads. I bought one, and now I’ll never turn back.

Krishna died a few months ago, but not before he had a chance — via Zoom — to admire my new look. I think of him every time I stroke my beard.


The Wrong Thing

In 1970, the year of the first Earth Day, I wrote a song called “X-Rays Coming Out of My TV.” It was a satirical folk-style song, inspired by Tom Lehrer’s “Pollution,” about how technology was destroying the environment. It was pretty sophisticated for an eighth-grader, but was seriously mediocre on any objective scale. My mother, however, was convinced that it would be my ticket to fame and fortune, and she somehow found a music publisher in New York City who was willing to talk to me. He scheduled an in-person meeting and requested that I bring a demo of the song.

I recorded my guitar and vocal on a reel-to-reel tape recorder, and borrowed a second recorder so I could add a vocal harmony track and, for good measure, a tambourine. Realizing that a devious New York publisher might try to take advantage of a naïve boy from the suburbs, I took the precaution of filling out an application and paying a fee (taken out of my allowance) to register a copyright with the U.S. Copyright Office.

The music publisher was surprisingly diplomatic. He listened to the demo tape, told me that it really hadn’t been necessary to add the tambourine, and gave me a couple of albums to listen to — John Prine and Randy Newman — to assist in my development as a songwriter. Then we went home, at which point I assumed that my mother would stop embarrassing me by bragging about my song. But that was not to be. “My son had a song copyrighted!” she told everyone who would listen.

Despite my explaining to her many times that anybody could have anything copyrighted, she talked about my copyright proudly for years. The time I’d put into writing the song didn’t matter; the important thing was a routine transaction that had taken me a few minutes. Since that time, I’ve noticed how often, in the same way, people place value on insignificant things at the expense of significant ones.

Jugglers and acrobats have a variety of tricks in their repertoire. Some are easy but look difficult, and some are very challenging but look easy. According to my college roommate Jay, who has juggled professionally for more than 40 years, an underappreciated item in the juggling repertoire is two people juggling eight clubs. “It takes an incredible amount of practice to get it,” he says, “but once you do, it looks just like juggling seven clubs.” Ideally, the audience would appreciate the skill and discipline required to make such a difficult trick look easy, but instead, they reserve their biggest reactions for the less-subtle tricks. “Juggling an apple and eating it,” Jay says, “is not particularly hard.” But it’s a guaranteed crowd pleaser.

Aerialists — those acrobats who perform high over your head, dangling from ropes or trapezes — take years to learn their craft. Audience members typically remain silent as they watch the performers execute their intricate maneuvers, but one thing every aerialist learns is that if he or she (usually she) simply does a split, the audience responds with instant applause. Aerialists actually have a name for this phenomenon: “claps for splits.”

One thing I’ve always wondered is — to borrow a trope from Jerry Seinfeld — what’s the deal with tap dancers? They can create incredibly rapid, varied, syncopated rhythms throughout a piece of music, but the climactic move is always the one where they lean forward, face the audience, and run in place while swinging their arms. That always seems to bring down the house, despite being the least artistic part of their act. I can only imagine that audiences applaud wildly at that point because of many years of conditioning.

My experience as an actor in, and writer for, a children’s theater troupe taught me that the one surefire way to get an audience of children to laugh is to have a character fall down. There’s always a moment near the end of a show where the kids get restless and their attention starts to wander. I always made sure to write a pratfall into the script at that point, regardless of whether it was motivated by the story. Nothing else — no matter how clever a joke is, no matter how elaborately a gag is set up — gets the same reaction.

As a community college teacher of digital art courses, I’ve always been surprised at my students’ response when a classmate shows an especially skillful piece of work to the class. Instead of asking what inspired the work or how it was accomplished technically, they tend to ask earnestly, “How long did that take?” Then they marvel at the amount of time the artist devoted to the work rather than at how well done it is.

I suppose that’s not much different from my parents’ attitude toward my own work. No matter what I produced, whether it was for school or for personal expression, they were much more concerned with how it was received — whether it got me a good grade, or whether it won an award, or whether it got me into the local newspaper — than the thing itself. And that extended into my adulthood. Ten years after I wrote “X-Rays Coming Out of My TV,” I wrote a one-act play called “Reel to Reel,” about the troubled owner of a recording studio. By this time, I had graduated from college, was working in publishing, and was living on my own in New Jersey. I decided to enter the play in a local playwriting contest, but first, I again took the precaution of registering it with the Copyright Office.

I won the contest, my play was produced, and my parents came to see it on opening night. Despite not having read the play and knowing nothing about it, they presented me with a plaque they’d had made to commemorate the occasion. Engraved in brass affixed to a cherry-wood rectangle, it said, “Congratulations to Mark Alan Schaeffer, author of the prize-winning play ‘Reel to Reel©.’ ” It was a sweet and thoughtful thing for them to do, but the thing my mother most wanted credit for was that she’d made sure the copyright symbol was appended to the title.
