Developing methods that can handle multiple simultaneous speakers is a major challenge for researchers in many fields of speech technology and speech science, for example in speech enhancement, auditory modelling, and machine listening. Many of these fields have seen significant research activity and great advances in recent years, but often in a siloed manner.
This cross-disciplinary special session will bring together researchers from across the whole field to present and discuss their latest research on multi-talker methods, encouraging the sharing of ideas and seeding future collaborations.
There are a great many open research questions in this area that we believe will be best addressed by fostering such new relationships. To give just a few examples:
This session will not solve these problems on its own, but it may bring together the researchers who can.
We welcome submissions on many different topics, including, but not limited to:
We anticipate that after a short introductory talk, the session will be entirely poster-based to give maximum opportunity for cross-disciplinary networking.
Papers submitted to the session should follow the regular Interspeech paper guidelines and the standard submission and review process. Accepted papers will appear in the main proceedings and the ISCA archive. Be sure to select “Multi-talker methods in speech processing” as your paper subject area when submitting.
Papers must be submitted by 1 March 2023; updates are permitted until 8 March 2023.
Peter Bell, University of Edinburgh, UK
Michael Akeroyd, University of Nottingham, UK
Jon Barker, University of Sheffield, UK
Marc Delcroix, NTT, Japan
Liang Lu, Otter.ai, USA
Jonathan Le Roux, MERL, USA
Jinyu Li, Microsoft, USA
Cassia Valentini, University of Edinburgh, UK
DeLiang Wang, Ohio State University, USA