Ultrasonics and the Perception of Sound Quality

Updated June 26, 2000

Mikko Mattila

Ultrasonics affect the perception of sound quality

The significance of ultrasonics as a factor in sound quality has long been downplayed. It has been assumed that ultrasonics cannot affect the perceived auditory sensation, because a person with normal hearing cannot detect individual ultrasonic signals.

  • According to a study by Oohashi et al. (AES preprint 3207), the effect of signal content above 26 kHz can be detected in the brain's alpha-EEG rhythm, and the effect persists for some time after the high-frequency stimulus ends. Listeners did not directly perceive the ultrasound as such, but when the ultrasound was included in a music signal, a difference was detected.

    The research paper by James Boyk excerpted below summarizes Oohashi's study, which reports that ultrasonics matter to the perception of sound quality:

    Given the existence of musical-instrument energy above 20 kilohertz, it is natural to ask whether the energy matters to human perception or music recording. The common view is that energy above 20 kHz does not matter, but AES preprint 3207 by Oohashi et al. claims that reproduced sound above 26 kHz "induces activation of alpha-EEG (electroencephalogram) rhythms that persist in the absence of high frequency stimulation, and can affect perception of sound quality." [4]
          Oohashi and his colleagues recorded gamelan to a bandwidth of 60 kHz, and played back the recording to listeners through a speaker system with an extra tweeter for the range above 26 kHz. This tweeter was driven by its own amplifier, and the 26 kHz electronic crossover before the amplifier used steep filters. The experimenters found that the listeners' EEGs and their subjective ratings of the sound quality were affected by whether this "ultra-tweeter" was on or off, even though the listeners explicitly denied that the reproduced sound was affected by the ultra-tweeter, and also denied, when presented with the ultrasonics alone, that any sound at all was being played.
          From the fact that changes in subjects' EEGs "persist in the absence of high frequency stimulation," Oohashi and his colleagues infer that in audio comparisons, a substantial silent period is required between successive samples to avoid the second evaluation's being corrupted by "hangover" of reaction to the first.
          The preprint gives photos of EEG results for only three of sixteen subjects. I hope that more will be published.

  • Further evidence of the significance of a wide, high-resolution audio bandwidth comes from research by dCS. Below is Stereophile's report from the 109th AES Convention, September 22-25, 2000:

    Chaired by Malcolm O.J. Hawksford of the University of Essex, the panel of experts discussed several obstacles to achieving higher levels of resolution in audio recording and playback. Mike Story, of dCS, Ltd., made a strong case for promoting ultra-wide audio bandwidth as a hi-rez standard—perhaps as much as 100kHz. While measurable hearing in normal adults rarely extends beyond 16kHz, many experiments have demonstrated that acoustic energy above this range can have a pronounced effect on the perceived realism of reproduced music. Story mentioned that dCS has conducted some very-well-controlled studies in which acoustic energy remains constant out to 30kHz while the energy in the 30–90kHz band varies. "The degree of focus and localization correlates quite well with the amount of energy in this band," Story mentioned. "Current theory is inadequate," he said of the standard engineering belief that a 20Hz–20kHz frequency response is all that is necessary for quality playback.

    Pioneer Corporation's Takeo Yamamoto agreed that bandwidth affects "the perceived depth of an acoustic image." A fascinating discussion ensued in which an alternate model of human hearing was presented as a possible explanation for the reason high-resolution audio sounds better. Instead of simply detecting tones, the hearing system might also detect impulses, or "clicks"—localization cues that arrive at the ears within a 10-microsecond window. These impulses necessarily lie above the bandwidth for tones—in the energy band studied by Story and his dCS colleagues. In the wild, hearing is the body's "early warning system," as one panelist put it, and the "wideband target locator" hypothesis might explain why high-resolution audio sounds better—because it gets the cues right.

    The human brain is "a most amazing pattern-recognition processor," said legendary loudspeaker and phase-relations researcher Siegfried Linkwitz. (The Linkwitz-Riley crossover formulation bears his name.) "Minimizing false cues should be one of our primary objectives," he said, drawing attention to the fact that a live trumpet played down the hall and around the corner is immediately perceived for what it is, and that very few people, expert listeners or not, would mistake a recording of the same instrument for the real thing.

  • Perception may also be affected by intermodulation between ultrasonic components, whose difference frequency can fall within the audible range. It is not entirely clear whether such intermodulation products arise in the air itself or only inside the human ear. Air behaves very linearly until particle velocities approach the speed of sound or signal amplitudes approach the magnitude of atmospheric pressure.
  • The use of the intermodulation products of strong, directional ultrasonic signals for music reproduction is under study. More information is available, for example, at http://www.atcsd.com/HTML/hss.htm
  • Members of certain indigenous peoples (who, among other things, have not been exposed to modern-day noise) can hear sounds at frequencies of up to 25 kHz.
  • The signals produced by some devices that use strong pulsed ultrasound (e.g., certain distance-measuring devices) are audible to the ear. With these devices, though, it is not always clear whether what is heard is the ultrasound itself or some other sound produced by the device.
  • Strong ultrasonics can be used as an instrument of torture.
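The intermodulation mechanism mentioned in the bullets above can be sketched numerically: when two ultrasonic tones pass through even a weakly nonlinear medium or transducer, a component at their difference frequency appears in the audible band. The sketch below is a minimal illustration only; the tone frequencies and the quadratic distortion term with its coefficient are arbitrary assumptions, not a model of air or of any actual device.

```python
import math

def tone_pair(f1, f2, fs, n):
    """Two equal-amplitude sine tones at f1 and f2 Hz, sampled at fs Hz."""
    return [math.sin(2 * math.pi * f1 * t / fs) + math.sin(2 * math.pi * f2 * t / fs)
            for t in range(n)]

def quadratic_nonlinearity(x, a=0.1):
    """Weak second-order distortion y = x + a*x^2 (an assumed toy model)."""
    return [v + a * v * v for v in x]

def dft_magnitude(x, f, fs):
    """Amplitude of the component at frequency f (single-bin DFT correlation)."""
    n = len(x)
    re = sum(v * math.cos(2 * math.pi * f * t / fs) for t, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * f * t / fs) for t, v in enumerate(x))
    return 2 * math.sqrt(re * re + im * im) / n

fs = 192_000              # sample rate high enough to represent the ultrasonics
n = fs // 8               # 125 ms: an integer number of cycles of every tone
f1, f2 = 40_000, 41_000   # two inaudible ultrasonic tones, 1 kHz apart

clean = tone_pair(f1, f2, fs, n)
distorted = quadratic_nonlinearity(clean)

print(dft_magnitude(clean, f2 - f1, fs))      # ~0: no audible difference tone
print(dft_magnitude(distorted, f2 - f1, fs))  # ~0.1: a 1 kHz tone has appeared
```

In the linear case the 1 kHz bin is empty; after the quadratic term, the product 2·sin(2πf1t)·sin(2πf2t) contributes cos(2π(f2−f1)t), so a difference tone at 1 kHz appears with amplitude equal to the distortion coefficient.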

How ultrasonics are conducted to the hearing organs

Ultrasonics are not necessarily converted into audible sound in the ear in the usual way, but they may be converted into perceivable stimuli through the bones of the ear. The following excerpt is from James Boyk's research paper:

In a paper published in Science, Lenhardt et al. report that "bone-conducted ultrasonic hearing has been found capable of supporting frequency discrimination and speech detection in normal, older hearing-impaired, and profoundly deaf human subjects." [5] They speculate that the saccule may be involved, this being "an otolithic organ that responds to acceleration and gravity and may be responsible for transduction of sound after destruction of the cochlea," and they further point out that the saccule has neural cross-connections with the cochlea. [6]

Even if we assume that air-conducted ultrasound does not affect direct perception of live sound, it might still affect us indirectly through interfering with the recording process. Every recording engineer knows that speech sibilants (Figure 10), jangling key rings (Figure 15), and muted trumpets (Figures 1 to 3) can expose problems in recording equipment. If the problems come from energy below 20 kHz, then the recording engineer simply needs better equipment. But if the problems prove to come from the energy beyond 20 kHz, then what's needed is either filtering, which is difficult to carry out without sonically harmful side effects; or wider bandwidth in the entire recording chain, including the storage medium; or a combination of the two.
      On the other hand, if the assumption of the previous paragraph be wrong — if it is determined that sound components beyond 20 kHz do matter to human musical perception and pleasure — then for highest fidelity, the option of filtering would have to be rejected, and recording chains and storage media of wider bandwidth would be needed.
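The remark about sibilants and jangling key rings can be made quantitative: a sufficiently short transient necessarily carries much of its energy above 20 kHz. The sketch below assumes an idealized 10-microsecond rectangular click (the same time scale as the localization window discussed earlier) sampled at 1 MHz; the naive DFT is chosen for clarity, not speed.

```python
import cmath

def dft(x):
    """Naive DFT; skips zero samples, which is cheap for a sparse click."""
    n = len(x)
    return [sum(v * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, v in enumerate(x) if v)
            for k in range(n)]

fs = 1_000_000                    # 1 MHz sampling, so the spectrum reaches 500 kHz
click = [1.0] * 10 + [0.0] * 990  # idealized 10 microsecond rectangular click

spectrum = dft(click)
n = len(click)

def bin_freq(k):
    """Frequency in Hz of DFT bin k, negative for the upper half."""
    return k * fs / n if k <= n // 2 else (k - n) * fs / n

total = sum(abs(c) ** 2 for c in spectrum)
audible = sum(abs(c) ** 2 for k, c in enumerate(spectrum)
              if abs(bin_freq(k)) <= 20_000)
print(f"fraction of click energy below 20 kHz: {audible / total:.2f}")
```

For this idealized click, well under half of the energy falls below 20 kHz; the majority lies in the ultrasonic range, which is why such transients stress any link in the chain whose behavior above the audio band is poor.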


Sources
http://www.cco.caltech.edu/~boyk/spectra/spectra.htm
http://www.hut.fi/Misc/Electronics/faq/sfnet.harrastus.audio+video/
http://jn.physiology.org/cgi/content/abstract/83/6/3548


All Rights Reserved
© 2000-2006 highendnews.com