
Q: Model for perceived sound amplitude as fn of angle?

Dan Fain fain at asterix.etho.caltech.edu
Mon Apr 17 23:57:54 EST 1995


An authoritative book on this subject is _Spatial Hearing_ by Jens
Blauert [English translation, MIT Press; out of print].

In article <3maa44$4v3 at news.duke.edu> lorax at acpub.duke.edu (Jeff Brent) writes:

>    I'm programming stereo sound software with the goal of making sounds
> sound like they're coming from different positions in space relative to
> the listener.  Though I know there are other contributing factors to our
> perception of where a sound is located, to _start_ with, I want to
> consider the relative amplitudes received in each ear, as a function
> of the 360 degrees around the listener.  (I don't know if there's any
> way to simulate height...) Also, though I'm not sure I can be this

The acoustics of the outer ear (pinna) appear to filter sound in a way
that conveys information about elevation (through the HRTF, or
head-related transfer function).  This filtering is generally thought
to introduce peaks and notches in the power spectrum of the sound,
though there are also time-domain theories of the perceptual process.
Since this outer-ear effect is well established, commercial systems
(see below) generally try to approximate it using digital signal
processing (DSP) hardware.
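
As a purely illustrative sketch of the idea (the notch-frequency
mapping and Q below are invented for the example, not measured pinna
data), one could impose a single elevation-dependent spectral notch on
a signal in Python:

    import numpy as np
    from scipy.signal import iirnotch, lfilter

    def apply_elevation_notch(mono, elevation_deg, fs=44100):
        # Hypothetical mapping: sweep a notch from 6 kHz up to
        # 12 kHz as elevation rises from -40 to +60 degrees.
        # Real pinna notches must be measured (see the HRTF
        # discussion below); these numbers are made up.
        notch_hz = 6000.0 + 6000.0 * (elevation_deg + 40.0) / 100.0
        b, a = iirnotch(notch_hz, Q=10.0, fs=fs)
        return lfilter(b, a, mono)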

Currently the leading way to simulate an anechoic environment (see
below) through headphones is to filter sounds with the HRTF for the
corresponding elevation, azimuth, and distance.  This is much more
expensive computationally than the frequency-independent approach
described below.  The HRTF of a KEMAR mannequin is available for free
from MIT (World-Wide Web, http://sound.media.mit.edu/KEMAR.html).
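
A minimal sketch of this approach in Python (assuming hrir_left and
hrir_right already hold the measured head-related impulse responses
for the desired direction, e.g. loaded from the KEMAR data above):

    import numpy as np

    def spatialize(mono, hrir_left, hrir_right):
        # Convolve the dry (anechoic) mono signal with the
        # impulse response for each ear; the result is meant
        # for headphone playback.
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        return left, right

Each output sample costs one multiply-add per HRIR tap per ear, which
is where the computational expense comes from.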

> exact, the slight delay in the farthest ear.  I think I have seen some
> equations for this sound placement thing; it seems that it would not be
> simple geometry, since one's face shields sound coming from a side....

Interaural time differences (ITD) dominate human auditory
localization in azimuth (the horizontal plane) for sounds below
1 kHz.  Above 1 kHz, interaural level differences (ILD) become more
important.  When these cues are treated as frequency-independent it
is difficult to achieve *externalization* (the perception that sound
sources are located outside the head), but there is usually a strong
sense of lateralization.

A simple approximation to the ITD [von Hornbostel & Wertheimer, 1920]
is:

	ITD = (d sin phi) / c

	d:   21 cm (NOT the ear-to-ear separation; empirically determined)

	phi: angle of incidence

	c:   speed of sound
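
In Python, that formula and a whole-sample delay of the far ear might
look like the sketch below (rounding to whole samples is a
simplification; fractional-delay filters are more faithful):

    import numpy as np

    C = 343.0   # speed of sound in air, m/s
    D = 0.21    # empirical constant d from the formula, m

    def itd_seconds(phi):
        # phi: angle of incidence in radians (0 = straight ahead)
        return D * np.sin(phi) / C

    def apply_itd(mono, phi, fs=44100):
        # Delay the ear farther from the source; for phi > 0
        # (source to the right, by this sketch's convention)
        # that is the left ear.
        n = int(round(abs(itd_seconds(phi)) * fs))
        far = np.concatenate([np.zeros(n), mono])
        near = np.concatenate([mono, np.zeros(n)])
        return (far, near) if phi >= 0 else (near, far)  # (left, right)

At phi = 90 degrees this gives about 610 microseconds, roughly 27
samples at 44.1 kHz.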

Even assuming a spherical head [e.g. see A. Moller, _Auditory
Physiology_], one must worry about diffraction of the sound waves (as
you point out, head shadowing attenuates sound on the opposite side
of the head but does not silence it).  It is also important to
distinguish between anechoic environments, where sound propagates
directly from sources to receivers, and the real world, with its
echoes, resonances, reverberation, and so on.
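
As a crude, hand-tuned illustration of that frequency dependence (not
a diffraction solution; the cutoff mapping is invented), one could
low-pass the shadowed ear more strongly as the source moves to the
opposite side:

    import numpy as np
    from scipy.signal import butter, lfilter

    def shadow_far_ear(mono, phi, fs=44100):
        # Map |sin(phi)| in [0, 1] to a cutoff falling from
        # near-Nyquist (no shadow) to ~1.5 kHz (full shadow).
        # Illustrative numbers only; real head shadow calls for
        # a diffraction model or measured HRTFs.
        frac = abs(np.sin(phi))
        cutoff = (1.0 - frac) * 0.45 * fs + frac * 1500.0
        b, a = butter(1, cutoff, btype="low", fs=fs)
        return lfilter(b, a, mono)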

As several other respondents mentioned, there are some commercial
systems available now for spatial sound synthesis, marketed by Crystal
River Engineering (CRE) <toni at cre.com> and Tucker-Davis
<quikki at nervm.nerdc.ufl.edu>.  I think CRE has a relatively low-cost
software system (Beachtron) which works on the Turtle Beach sound card
for IBM PC clones.  Their high-end system, the Convolvotron, has been
used at NASA Ames in perceptual experiments for a while.

Somebody mentioned "3D sound" as a starting point for a literature
search.  Some other keywords are "virtual audio", "spatial hearing",
"interaural [level/intensity/time/phase] difference", "head-related
transfer function."

-----------------------------------------------------------------------
Dan Fain
Computation and Neural Systems
Caltech
-----------------------------------------------------------------------


