Tune That Name

Our household uses very little butter — mostly just to prepare eggs for breakfast — so it’s not surprising that I’ve only just had my first encounter with Challenge Butter. (The brand, I’ve since found out, has been around for more than a century.) I don’t know how the package ended up in our kitchen — it was probably bought on a day when the plain store brand was unavailable — but it certainly gave me a start when I opened the refrigerator door. The box portrays a pristine lake surrounded by evergreens, similar in style and coloring to the illustration on the Land O’Lakes box, but the placid scene is dominated by an outsized, heavily antlered deer at its center. This is not one of your meek, Bambi-like deer; it’s a deer that would cause you to freeze in its headlights (if it had headlights). This is a deer that says, “I challenge you to eat this butter.”

The thing is, I don’t want my butter to be challenging. Butter is supposed to be compliant — it’s supposed to spread when you want it to spread, and melt when you want it to melt. I’m trying to imagine the focus group at which they tested the name “Challenge Butter”:

“That is the stupidest name for a dairy product that I’ve ever heard.”

“Oh, yeah? Want to take it outside?”

Contemplating this badly branded foodstuff takes me back to my childhood, when I made frequent use of an unappealingly named product called “Testor’s Pla.” Pla was an oil-based paint, an enamel, that was most often used for detailing plastic model kits. It was sold in tiny bottles, holding a fraction of a fluid ounce, for about twelve cents apiece. At the five-and-ten-cent store, a rack displaying Pla bottles in dozens of different colors made purchasing one almost irresistible.

But the name! Our neighbor Jackie, who used to babysit for my sister and me, would always make me laugh when she looked sadly at my row of bottles and drawled “Pla-a-a-ah,” as if she were trying to eject something distasteful from her throat. The only way I could account for the name was to imagine that it was originally something like Placenta or Plantagenet, but that the full word was too long to fit on one of those tiny labels.

The Testor Corporation is still around, but the name Pla appears to have been retired in favor of “Testor’s Enamel.” I’m hoping that the trend will continue, and that we can look forward to eventually seeing “Encouragement Butter.”

I can’t conclude a discussion of unfortunate product names without returning to one of my pet peeves, “cheese.” Obviously, “cheese” is a generic term, not a brand name, but that makes it all the worse, because every derivative of milk curds therefore receives that unappetizing designation. Any word that consists of a harsh consonant followed by a whining, constricted vowel sound can’t refer to anything good. (See “pet peeve,” above.)

Try saying it out loud: “Che-e-e-e-se.” Does that really sound like something you’d want to ingest? The fact that it echoes the universal childhood expression of distaste, “E-e-e-e-w,” can’t be coincidental. With so many English food terms having been borrowed from French, I wish we could have gone with the lovely word “fromage.” I’ve always been resistant to putting cream cheese (“cre-e-e-m che-e-e-se”) on my bagels, but I’m sure my taste would have developed differently if the traditional topping had been “le fromage à la crème.” As it is, I generally eat them cheeseless, topped instead with the most docile butter I can find.


Fetch

Not long after I’d acquired the minimal necessary reading and math skills, my father introduced me to line graphs. He showed me how a line projected from the X-axis intersected with a line projected from the Y-axis to form a data point, and how those data points could be connected by line segments to show a trend. I loved the idea that numbers could be converted into elegant drawings that would then reveal hidden information.

My father gifted me with a stack of graph paper, and I asked him for some data to graph. He complied by giving me paired lists of measurements: temperature changes by date, sales figures by month, population growth by year. In those pre-internet days, those sorts of statistics were difficult to come by, so he simply made up the numbers. I didn’t care. I delightedly graphed all of the information he gave me, presented him with the finished graphs, and then asked him for more numbers.
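
(A side note for the technically inclined: if I were reconstructing one of those exercises today, I'd probably do it in a few lines of Python rather than on graph paper. The sketch below is purely hypothetical, and the temperature figures are as made-up as my father's were.)

```python
# A hypothetical reconstruction of one of those childhood graphing exercises:
# paired lists of measurements, plotted as points and connected by line
# segments to show the trend. The numbers are invented.
import matplotlib.pyplot as plt

dates = ["Jun 1", "Jun 2", "Jun 3", "Jun 4", "Jun 5"]  # values along the X-axis
temperatures = [68, 71, 75, 73, 79]                     # values along the Y-axis

plt.plot(dates, temperatures, marker="o")  # each (date, temperature) pair is a data point
plt.xlabel("Date")
plt.ylabel("Temperature (°F)")
plt.title("Temperature changes by date")
plt.show()
```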

After a few days of inventing data sets and handing them over, he apparently began to regret what he had started. “You know,” he said finally, “you don’t need me to keep supplying you with data. You can make up the numbers yourself.”

I was crestfallen. Here was yet another instance of my father just not getting it. What possible joy could there be in graphing numbers that I’d concocted on my own? The whole point was to take data that was given to me in one form, convert it into another form, and hand it back in its improved state. If I were going to randomly invent numbers and graph them, I might as well skip the first phase altogether and simply draw random line segments on graph paper. Without a sense of purpose, the task was meaningless.

In retrospect, I can’t blame my father for wanting relief from having to generate all of those figures. I’m surprised that he was initially willing to do it at all. Nevertheless, his abdication ended my interest in graphing, and my remaining supply of graph paper went unused.

In the sixty or so years since then, I can’t claim to have changed much. I’m still really content only when someone gives me creative work to do or problems to solve. Not only do I get the reward of making someone else happy; I also get to learn new things along the way. That’s why I continue to seek out work even in retirement, even if I don’t get paid for it. I’m like the dog who approaches with pleading eyes and a stick in his mouth, begging you to throw it so he can run and retrieve it. The dog isn’t about to toss the stick himself and then bring it back — what would be the point of that?

People are surprised that I carry my laptop with me when I travel, and that I happily tap away at it whether I’m on the deck of a cruise ship or in a booth at an English pub. Make no mistake, the middle of the ocean and the middle of London are two of my favorite places to be. But it’s never enough just to be in an environment; I have to do something while I’m there, and why not do the thing that gives me pleasure and makes me feel useful?

I seem to be in the minority in this regard. When I taught digital arts courses at Chabot College, I used to pride myself on coming up with unusual and challenging assignments for my students, such as “Create a still life using only two of the three primary colors,” or “Take an ordinary snapshot of a person and transform it into a glamour portrait.” No student likes to be confronted with a difficult exercise, so I tried to explain that I was actually giving them a gift. “Everything I’ve ever learned in my professional life,” I would say, “has come from figuring out ways to complete tasks that I was hired to do, in a way that would satisfy my clients, using whatever resources I had at my disposal. So what I’m offering you is an opportunity — an opportunity to learn.” I don’t think many of my students were convinced.

For me, at least, solving an assigned problem is the only way to learn. In cases where I don’t have a client telling me what to do, I have to invent one. For many years, I had to assign myself the project of creating something that could be called “art,” for a discerning client known as the annual faculty show at the campus art gallery.  And each summer, when I was preparing to teach new material in my fall courses, I would give myself an assignment that would require me to master new skills, such as “Use elementary JavaScript to create a virtual game of Whack-a-Mole” or “Construct a series of three-dimensional household objects using Adobe Illustrator.” The imaginary client in these cases would be my students, who would be ill-served if I tried to teach them skills that I wasn’t myself proficient in.

For this blog, you, the reader, are my client, and my task is to regularly find something to say that you’ll find unusual and interesting. Every time I’m able to finish one of these posts, it’s because I’ve summoned up a mental picture of you throwing a stick.


Food for Thought

“You deserve this spoon cake,” said the headline on the LifeHacker website. I wondered what this implied about the quality of the spoon cake, given that I’d accomplished nothing worthwhile that week. But then I remembered that LifeHacker had no means to assess my degree of merit — it didn’t even know who I was. The headline was meant to suggest that everybody deserves this spoon cake (and, by implication, that the spoon cake is delicious).

Here’s the problem: To “deserve” something generally means that one has done something to earn the thing (or at least has done nothing to forfeit the privilege of having it). The word’s purpose is to distinguish those who are deserving from those who aren’t. But if everyone deserves something, the word becomes meaningless.

I first encountered this problem many years ago when McDonald’s began running commercials saying “You deserve a break today.” I was a teenager when this slogan came into being, and even then, I found it insulting. Clearly, McDonald’s was trying to flatter me, to contrast me with those sluggards who hadn’t been doing their work and therefore were unworthy of getting a break. But McDonald’s had no way to know that I wasn’t a sluggard, and therefore their claim was disingenuous.

“Why would anybody take those commercials seriously?” I asked my father.

“Those ads are intended for people who aren’t going to think about them too much,” he said. “You’re not one of those people.”

I’m reminded of this, oddly enough, because I recently encountered a young woman wearing extremely torn jeans. When I say “extremely,” I mean that pretty much the entire front of each pants leg was missing, from the lower thigh to the upper calf.

Now, I can think of two practical reasons to wear pants: One is to protect your legs from rain, cold, or sun; the other is to cover your legs for the sake of modesty or dignity. Clearly, these jeans served neither purpose, so the only other reason I could imagine for her choice of wardrobe was to make a statement.

But what sort of statement? Did she mean to communicate that she was a rebel, too cool to care what people like me thought? Did she wish to demonstrate that she was too spiritual and idealistic to concern herself with material things? Did she simply want to fit in, because all of her friends were wearing extremely torn jeans?

I suppose you could say that — as with the McDonald’s ads — my failure to understand her message means that I was not part of her intended audience. She was wearing those jeans solely to appeal to people who, unlike me, would understand why she was wearing them. As for me, I’m presumed to just continue along my way: Nothing to see here!

But something about that conclusion feels a little too facile — too close to the logical fallacy known as “no true Scotsman.” For those of you who aren’t acquainted with the catalog of logical fallacies, the traditional illustration is this: One man states a rule or generalization, such as “No Scotsman puts sugar on his porridge.” Another objects, “Well, I’m a Scotsman, and I put sugar on my porridge.” To which the first one responds, “Well, no true Scotsman puts sugar on his porridge.” In other words, the first person contrives to make his rule unfalsifiable by specifically excluding any counterexamples, thereby making the initial statement pointless.[1]

To say that “if you don’t understand the message, then it wasn’t intended for you” has a similar effect: It automatically excludes the possibility that the message is incompletely thought out, or badly expressed. If “you deserve this spoon cake” is meaningful only to people who already believe that they deserve that spoon cake, it’s not a very useful assertion. There are plenty of people who don’t feel worthy of spoon cake, but would likely still enjoy it if it were offered to them.

Imagine how much more effective our political discourse would be if we could find ways to express things that are clear to everyone, regardless of their preconceptions. (Perhaps something along the lines of “Lots of people think this spoon cake is really yummy!” or “If fast food is a treat for you, consider getting it at McDonald’s!”) People would still disagree, but at least they would have a shared understanding of what they’re disagreeing about.


[1] For another example of “no true Scotsman” — this one involving concealed gold — see Atmosphere (3).


Hello, DALL·E

You may have noticed that my post from the beginning of February was accompanied not by one of my usual photo-illustrations, but by the work of a guest artist named DALL·E. For those who don’t follow tech news, DALL·E is an artificial intelligence (AI) system that’s designed to create images based on a verbal description. For example, you can feed DALL·E a phrase like “A platypus watching TV in the style of Renoir,” and it will give you exactly that, in multiple variations.

As someone who much prefers putting my laundry in a washing machine to scrubbing it against a washboard, I saw no reason not to give DALL·E a try. Why expend time and effort messing around in Photoshop when I can simply type “One person reaching down to help another person get out of a hole”? I wouldn’t say that the resulting illustration is dripping with artistic merit, but neither could that be said about whatever I’d have created.

I have plenty of thoughts about the significance of DALL·E (and its word-oriented cousin, ChatGPT), but so does pretty much every other blogger and columnist in the world, so I’ll spare you mine. I will, however, say a bit about my personal experience in using it.

My approach to making any sort of visual image has always been one of trial and error. I’ll start out with a fuzzy idea of what I want the image to look like, and then look at dozens or even hundreds of online photos and drawings to find elements that match the undeveloped picture in my head. I’ll then use a few of those elements to build a rough composition — often swapping different elements in and out along the way — and gradually refine them until they begin to work together. The more refining I do, the more concrete the image becomes. It’s often not until the image is fully worked out that I realize how bad it is. At that point, I either give up and start over, or go back to the last good decision and work from there. After enough dead ends and new choices, I finally achieve a result that I can live with. That process, as you might imagine, takes hours.

The amazing and humbling thing is that DALL·E follows pretty much the same process[1], but does so in seconds rather than hours. To be honest, “amazing” and “humbling” are not the most accurate words; better ones might be “exasperating” and “infuriating.” How many cumulative months or years of my life have I spent creating half-baked images that just get thrown away? By contrast, DALL·E makes the act of creation seem instant, and even though I know intellectually that it isn’t — that DALL·E is invisibly generating and destroying a greater quantity of valueless images than I could ever imagine — I can’t help perceiving the bulk of human activity as inefficient and futile.

At the same time, I’m made aware of how precious that inefficiency and futility are! After all, I remain driven to put out this blog even though I’m aware that there’s no real reason to do it. I’d probably go on writing it even if I didn’t have any readers. And having had a trial collaboration with DALL·E, I’ve still returned to making illustrations on my own, despite the work involved. Humans need purpose, and I can’t imagine any advancement in technology that would obviate that need.

One of the concerns that I’ve seen expressed about AI technology is that it will put many artists and writers — not to mention other creative professionals, such as teachers — out of work. No doubt it will, just as the invention and refinement of sound recording and sound synthesis put a lot of live musicians out of work. Society will adapt, and those who lose their livelihoods will, as they always have, find other ways to make a living.

But, just as the technologization of music hasn’t diminished the amount of music in the world, DALL·E, ChatGPT, and their successors certainly won’t deprive the world of visual and verbal art. Painters need to paint, dancers need to dance, writers need to write, and they will always find opportunities to do it. Which segments of the marketplace will value human-generated creations over machine-generated ones remains to be seen, but no doubt such markets will continue to exist. And in the end, just as they have with music, advancements in AI technology will no doubt broaden our ideas about what qualifies as art and what satisfies our souls.

Not exactly Renoir — but still, not too shabby!

[1] An admittedly oversimplified description of DALL·E’s strategy is that it generates a series of images — millions of them — with random variations, each of which it evaluates according to a set of rules that it has acquired by analyzing existing human-made images. If a newly generated image is judged to fit the rules better than the previous one, it’s used as the basis for further variations; otherwise, it’s thrown away. The eventual result is a group of final images that adhere to the rules as closely as possible.
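
(For readers who think in code, here's a toy sketch of the generate-evaluate-discard loop described above. It's deliberately simplistic, and the function names and scoring are invented stand-ins, not anything resembling DALL·E's actual machinery.)

```python
# A toy sketch of the generate-evaluate-discard loop described in the footnote.
# "make_variation" and "score_against_learned_rules" are hypothetical stand-ins.
import random

def make_variation(image):
    """Hypothetical: return a randomly perturbed copy of an image."""
    return [pixel + random.uniform(-0.1, 0.1) for pixel in image]

def score_against_learned_rules(image):
    """Hypothetical: rate how well the image fits rules learned from
    human-made images (here, just a dummy score)."""
    return -sum(abs(pixel - 0.5) for pixel in image)

best = [random.random() for _ in range(16)]   # start from random noise
best_score = score_against_learned_rules(best)

for _ in range(100_000):                      # the real thing generates millions
    candidate = make_variation(best)
    candidate_score = score_against_learned_rules(candidate)
    if candidate_score > best_score:          # keep it as the basis for further variations...
        best, best_score = candidate, candidate_score
    # ...otherwise the candidate is simply thrown away
```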


Iconoclasm

When I bought my first PC in 1984, the salesperson cautioned me that I wouldn’t be able to use it right away. It was missing an essential component — something called a “disk operating system” — which was required for the computer to do any computing. The operating system (more succinctly known as DOS) came on a floppy disk, which was backordered and expected to arrive in a few days. In the meantime, all I could do was play the computer’s demo disk, designed to show off its capabilities on the showroom floor, over and over and over.

Despite that annoyance, once I learned to use DOS, I quickly became a fan. I loved the simple elegance of the C:\> prompt with the flashing cursor, patiently waiting for me to type in a command. But my romance with DOS was doomed. The same year I bought my PC, Apple introduced the Macintosh, with a graphical user interface that allowed users to do much of their work simply by clicking or dragging with a mouse. I dismissed the Mac as a toy — something that was appealing to beginners, but not suitable for serious work — and assumed that it would be a passing fad. Instead, it was quickly and widely accepted as the model for what a desktop computer ought to be.

My friend Brad, an early advocate of graphical interfaces, urged me to come aboard. He said that the Macintosh’s intuitive way of doing things represented the future of computing.

“On the contrary,” I said, “it’s like going back to the Stone Age. Back then, if you wanted to refer to something, you had to point to it. That was inefficient. That’s why we invented language.”

“But it’s easy to make mistakes when you type in commands,” he insisted.

“And it’s just as easy to make mistakes if you’re not good at handling a mouse,” I said, having had that experience in my experiments with using one.

Once I accepted that change was inevitable, I dodged Microsoft’s weak replacement for DOS, called Windows, by switching to a Macintosh. I’ve had nothing but Macs for 40 years now, and I confess that I’ve come to like using a mouse. But one consequence of that industrywide switch from words to pictures that drives me crazy is the need for everything to be represented by a visual symbol, whether useful or not.

The original Mac icons were simple and clear. For example, in the very first version of Photoshop, it was easy to grasp which icon denoted a brush, which denoted a pencil, and which denoted an eraser. But as Photoshop became more sophisticated and more features were added, providing an immediately recognizable icon became next to impossible.

For example, how would you visually represent the Content-Aware Move Tool (which allows you to move a selection from one place to another, with Photoshop magically repairing the place it was moved from), or the 3D Material Drop Tool (which allows you to sample the surface texture of a 3D object and apply it to another 3D object)? Each of them does have an icon, but I couldn’t have told you what either one looks like without looking it up first. I find most icons in current programs to be completely useless, such that I usually have to ignore them entirely and just roll over them to see their names pop up.

At least the Mac interface offers a quick way to see the words that define an icon. That’s not true in other environments that have been gradually overtaken by symbols. For example, I recently found myself driving a rental car with “idiot lights” that were identified solely by icons. The most puzzling was a picture of a car that suddenly illuminated on the dashboard. What could it possibly have been trying to tell me? I already knew that I was in a car. And frustratingly, there was no way to get any further information without pulling over to look in the owner’s manual.

Relatively certain that I wasn’t in the midst of an emergency, I waited until I got home to look it up, at which point I found out that the icon meant that there was a car in front of me. (I’m so glad that the manufacturer thought to provide that warning — otherwise, I might have had to, you know, look through the windshield.) But even granting that the alert was useful, wouldn’t it have been even more useful if it had consisted of the words “Car ahead”?

I seem to remember that prior to the age of icons, car dashboards used to display the warnings “Check Engine” and “Check Oil.” I don’t know about you, but when I look at the pictures that have supplanted them, I see a meat grinder and a genie’s lamp. This, I still maintain, is why we invented language.
