AudioGuide is a program for concatenative synthesis that I'm currently developing with Norbert Schnell, Philippe Esling and Diemo Schwarz. Work began in 2010 at IRCAM when I was composer in residence for musical research. Written in Python, it analyzes databases of sound segments and arranges them to follow a target sound according to audio descriptors. The program outputs soundfile event lists that can be either synthesized (in Csound or Max/MSP/Pure Data) or translated into symbolic musical notation.
AudioGuide differs from other programs for concatenative synthesis in three important ways.
Database samples are matched to the target sound based upon time-varying descriptors, thus permitting similarity to account for sounds' morphology. This allows for longer chunks of the database to be matched, enabling concatenated outputs that use whole acoustic sound segments rather than windowed grains.
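As a rough illustration of the idea (not AudioGuide's actual matching code), time-varying similarity can be sketched as a frame-by-frame distance between descriptor time series, so that a whole database segment is scored on how its descriptor curve evolves, not just on an averaged value. The function names and the equal-length assumption below are mine, for illustration only:

```python
import numpy as np

def segment_distance(target_frames, db_frames):
    """Mean frame-wise Euclidean distance between two descriptor
    time series (arrays of shape frames x descriptors), so that
    the comparison follows the sounds' morphology over time."""
    return float(np.linalg.norm(target_frames - db_frames, axis=1).mean())

def best_match(target_frames, database):
    """Index of the database segment whose descriptor evolution
    is closest to the target's (assumes equal-length segments)."""
    dists = [segment_distance(target_frames, seg) for seg in database]
    return int(np.argmin(dists))
```

A real system must additionally handle segments of different lengths (e.g. by time-stretching or windowed comparison), which this sketch omits.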
The user may include, exclude and/or weight various audio descriptors to achieve different similarity contexts (rejecting the idea that a context-independent notion of similarity exists), such that a single target and a single database may be used to create a vast array of 'variations' in likeness and semblance.
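The effect of descriptor weighting can be sketched with a simple weighted distance: a weight of zero excludes a descriptor from the similarity context, and changing the weights changes which database sample counts as "closest". This is only a hypothetical illustration of the principle; the descriptor names and function are not AudioGuide's:

```python
def weighted_distance(target, candidate, weights):
    """Weighted Euclidean distance over named descriptors.
    A zero weight removes that descriptor from the comparison."""
    return sum(w * (target[k] - candidate[k]) ** 2
               for k, w in weights.items()) ** 0.5

# Two similarity contexts over the same target and candidates:
target = {'centroid': 1.0, 'loudness': 0.5}
a = {'centroid': 1.0, 'loudness': 5.0}   # matches the target's centroid
b = {'centroid': 9.0, 'loudness': 0.5}   # matches the target's loudness

by_centroid = {'centroid': 1.0, 'loudness': 0.0}  # 'a' is closest
by_loudness = {'centroid': 0.0, 'loudness': 1.0}  # 'b' is closest
```

The same database thus yields different concatenations depending on which descriptors the user deems relevant.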
Samples can be selected simultaneously (either vertically or horizontally overlapping) to contribute to the representation of the target's characteristics. Therefore, several corpus samples may be used to approximate different aspects of the target's energy. In addition, database samples can be layered to better approximate the entirety of a target sound, if so desired. To hear these results, listen to the simultaneous selection examples on the examples page.
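One way to picture layered selection (a simplified sketch, not AudioGuide's algorithm) is a greedy loop over a residual: each chosen sample's contribution, here reduced to an amplitude envelope, is subtracted from what remains of the target, and further samples are added only while they reduce the error:

```python
import numpy as np

def layered_selection(target_env, db_envs, max_layers=4):
    """Greedily pick database envelopes whose sum approximates the
    target's amplitude envelope; stops when a new layer no longer
    reduces the residual error. Returns chosen database indices
    (the same sample may be layered more than once)."""
    residual = np.asarray(target_env, dtype=float).copy()
    chosen = []
    for _ in range(max_layers):
        errs = [np.linalg.norm(residual - e) for e in db_envs]
        best = int(np.argmin(errs))
        if errs[best] >= np.linalg.norm(residual):
            break  # adding this layer would not improve the fit
        chosen.append(best)
        residual -= db_envs[best]
    return chosen
```

A full system would of course select over many descriptors at once rather than a single envelope, but the residual-subtraction idea is the same.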