I am a member of the personal music player generation, moving over the years from the positively primitive Walkman to my cutting-edge 60 GB video iPod. (A message to would-be thieves: you will pry that iPod from my cold dead fingers, buddy.) I spent 14 years in New York City, one of the loudest places on earth. Add in the noisy step and aerobic classes at the gym over countless years, and it's not surprising that I've begun to experience some minor hearing difficulties, albeit in very specific frequency ranges. In particular, I have difficulty making out human speech in very noisy ambient environments. I can manage, but I really have to concentrate, peppering the conversation with "I'm sorry? What did you say?" It must be so annoying to those with whom I am attempting to converse.
The ability to distinguish and separate various auditory streams is one of the more impressive accomplishments of the human brain. Walk into a crowded bar, with music blaring, and your first impression is likely to be a shudder at the sudden wall of sound -- which you will interpret at first as a single loud noise. But very quickly, you adjust, and different sounds begin to emerge. We learn to pick out certain aural streams by focusing on a particular speaker or conversation, or the melody being played by the live band (or perhaps a particularly inventive bass line).
Shihab Shamma, a researcher at the University of Maryland, College Park, believes this ability arises from auditory nerve cells in the brain that re-tune themselves to specific sounds as part of the adaptive process. It's kind of an auditory feedback loop that enables us to sort out confusing incoming acoustical stimuli. He's surprised, however, by how quickly this process happens: auditory neurons in adult mammal brains make the adjustment in a few seconds. To Shamma, this suggests that the developed brain is even more "plastic" or adaptable than previously realized. We're literally changing our minds.
Getting back to that bar-room scene: we navigate by tuning our neurons to specific voices, thereby tuning out others -- like that irritating, leering would-be Lothario at the other end of the bar, or all that ambient noise. Or perhaps we align our sensitivity to the acoustical characteristics of that throbbing bass guitar riff. Scientists are still a bit in the dark in terms of understanding the mechanisms that cause this rapid tuning, but Shamma says that if we can mimic those abilities, it could lead to the development of more effective hearing aids and cochlear implants. At the rate my hearing seems to be deteriorating, let's hope that will happen before I'm in my dodderage. In the shorter term, it might help improve automatic speech recognition systems by teaching them to filter out moderate levels of background noise and other acoustical "clutter."
Communication (whether at a cocktail party or otherwise) is so important, especially across cultures. Unfortunately, certain things just don't translate very well: words, slang, specific idioms, and sometimes even sounds. (That's one reason why fans of Hong Kong cinema, like Jen-Luc Piquant, delight in the cracked English subtitles that frequently appear. Consider this gem from The Naked Killers, uttered after a male character experienced an extremely painful groin injury involving a sharp fountain pen: "I lost one ball of mine!") For instance, when native English speakers listen to the ticking of a clock, they hear a "tick-tock" sound, whereas a native Japanese speaker listening to the exact same timekeeping device will hear "tock-tick." Take a spectrogram of the sound and it will be identical, so how can this be? According to a US-Japanese team of researchers, the phenomenon occurs because people naturally group the individual sounds they hear into larger units, organized by rhythm.
The prior assumption had been that speakers of all languages grouped their sounds into short-long patterns (like tick-tock), when in fact, Japanese speakers tend to group sounds into long-short patterns. John Iversen of The Neurosciences Institute in San Diego thinks one possible explanation might be the influence of native languages: English has a lot of examples of short-long rhythms ("the dog," "to eat"); that's why iambic pentameter is the most common meter in English verse. Japanese has many long-short patterns ("mari-ga," "hon-wo"). So one's native language can influence the most basic aspects of sound perception. It's just a hypothesis at this point, but Iversen and cohorts plan to test it by predicting (and then, one assumes, experimentally testing) how speakers of other languages hear music and other non-linguistic rhythms. You can read about the study in greater detail, and hear nifty sound samples, here.
This might help you navigate the minefield of cross-cultural conversation at a party, but what if you just can't think of anything to talk about? I picked up some great cocktail-party-chatter tidbits during Wednesday's afternoon session on musical acoustics, involving similarities and differences among various stringed instruments in Asian cultures. True, my notes are a bit sketchy -- there was a pronounced communication barrier, since several speakers spoke English as a second language and were in fact substituting for the actual invited speakers, so they weren't as familiar with the PowerPoint presentations they were using.
Nonetheless, I learned a new word -- "sawari" (literal translation: "mountain top") -- which is a device used on Chinese (and many Japanese) plucked lutes to generate reverberation. Basically, it's a raised bamboo plate attached to the surface of the instrument, usually near the top fret, that arose from a practical need to produce a longer sound and slightly higher frequency to meet the requirements of ancient Chinese performance spaces. Initially only the first string passed over it, but in later versions, all the strings passed over the device to enhance the reverb.
There were pictures of stunning ancient instruments, one memorably made out of a resonant hardwood and python skin, a bit like the sanxian pictured. (Check out this nifty MP3 sample to hear what it sounds like.) The session ended with a half-hour mini-concert of traditional Japanese music played on period instruments. I also learned that the Indian sitar has a similar device (a "jawari") to create a buzzing reverb and optimize tonal quality. Unlike the Chinese version, it is generally located on the top plate of the body, rather than the top fret of the neck.
Whatever you choose to talk about at this weekend's various fetes, tonight, at least, let there be some acknowledgment of World AIDS Day. (Also, Happy Blogoversary to Angela Gunn over at USA Today's TechSpace.) It's tradition for blogs to go dark on this day, but I think the loved ones we've lost over the years might be better served by a celebration of life. Certainly that's what my late beloved pal, Nick Bucci, would have wanted. He reveled in life's many pleasures more than anyone I've ever met, from fine food and wine, great books, and theater, to a closet full of Armani clothing and silk shirts. Four months before he died, I took him to Christmas dinner at a friend's house, where (despite being terminally ill and positively skeletal) he had the entire party laughing so hard with outrageously ribald stories that they were spewing wine through their nostrils. So Nick -- this one's for you. I still miss you.