Auditory Scene Analysis: Computational Models
Listeners must make sense of a complex acoustic world in which overlapping sound sources have to be organised into individual auditory objects. Computational auditory scene analysis (CASA) uses algorithms inspired by human sound perception to extract the properties of the constituent sound sources in a complex mixture. Starting from representations based on models of how sound is processed in the peripheral auditory system, typical CASA techniques decompose the mixture into components and then selectively recompose those components into groups that appear to emanate from a single source. Grouping can be driven by cues in the signal itself or by prior statistical models of sound sources. This chapter outlines some of the principal signal decompositions used in models of auditory grouping and goes on to describe a decoder that combines both signal- and model-driven grouping processes.
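The decompose-then-regroup pipeline described above can be illustrated with a deliberately simple sketch (my own toy example, not a method from the chapter): an FFT stands in for the peripheral filterbank decomposition, and spectral components are grouped with a binary mask before being selectively recomposed into separate sources.

```python
import numpy as np

def separate_two_tones(mixture, sr):
    """Toy decompose-then-regroup separation for a two-tone mixture.

    Decomposition: FFT of the mixture into spectral components (a crude
    stand-in for a peripheral auditory filterbank).
    Grouping: a binary mask assigns each component to one of two groups,
    split at the midpoint between the two dominant spectral peaks.
    Recomposition: each masked spectrum is inverted back to a waveform.
    Assumes exactly two well-separated tonal sources (hypothetical setup).
    """
    spec = np.fft.rfft(mixture)
    freqs = np.fft.rfftfreq(len(mixture), 1.0 / sr)
    mag = np.abs(spec)
    # Find the two dominant spectral peaks (assumed: one per source).
    peaks = np.argsort(mag)[-2:]
    split = freqs[peaks].mean()          # boundary between the two groups
    mask_low = freqs < split             # binary grouping mask
    # Selective recomposition: invert each masked spectrum separately.
    src_low = np.fft.irfft(spec * mask_low, n=len(mixture))
    src_high = np.fft.irfft(spec * ~mask_low, n=len(mixture))
    return src_low, src_high

# Usage: mix two sinusoids and recover approximations to each.
sr = 8000
t = np.arange(sr) / sr
s1 = np.sin(2 * np.pi * 440 * t)
s2 = np.sin(2 * np.pi * 1760 * t)
low, high = separate_two_tones(s1 + s2, sr)
```

Real CASA systems replace the FFT with an auditory-motivated time-frequency decomposition (e.g. a gammatone filterbank) and replace the fixed frequency split with grouping cues such as harmonicity, common onset, or learned source models, but the decompose-mask-recompose structure is the same.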