AudioCHI 2022
Workshop on Audio Collection Human Interaction
Organised in conjunction with the ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR 2022), held online on Monday, 14 March 2022
Aims
AudioCHI 2022 was a half-day workshop focusing on human engagement with spoken material in search settings, including live-streamed audio and audio collections. It moved beyond spoken content retrieval to examine how users interact with this content during the search process.
Challenge questions
- Use cases --- What is the parameter space of use cases for the search, retrieval, and exploration of speech audio?
- Features --- What are some interesting audio features we do not pay enough attention to?
- Shared tasks --- Can we suggest some new shared tasks that would span the above space interestingly?
Program and Schedule
The program was scheduled for a half day to fit European afternoon and US East Coast morning working hours and included two presentations and an open discussion session.
- 1400 CET / 9am EDT: Introduction and overview: initial discussion of the challenge questions
- 1500 CET / 10am EDT: Presentation: Moreno La Quatra, "Bi-modal Architectures for Deeper User Preference Understanding from Spoken Content"
- 1530 CET / 10:30am EDT: Coffee break
- 1600 CET / 11am EDT: Keynote: Doug Oard, "Talking with the Planet"
- 1700 CET / noon EDT: Discussion and write-up of the challenge questions
- 1800 CET / 1pm EDT: Close
Interacting with Spoken Content Collections
Spoken material comes in many forms: factual or entertaining (or both!), of current or historical interest, local or global in scope, single-speaker monologues or multi-person dialogues. Users engage with spoken material for a variety of reasons, including entertainment, current affairs, education, and research.
While there has been considerable previous work on spoken document retrieval and, more generally, spoken content retrieval, AudioCHI 2022 was the first meeting to explore user engagement with audio content, including the use of extracted verbal and non-verbal features to create rich content representations. AudioCHI examined human factors in interaction with spoken audio content, building on existing work in the field of spoken content retrieval. The workshop sought to bring together researchers in spoken content retrieval, interactive information retrieval, and engagement with speech data, to examine the opportunities and challenges in advancing technologies for speech search and interaction with spoken content.
Organisers
- Gareth J. F. Jones, Dublin City University
- Maria Eskevich, CLARIN ERIC
- Ben Carterette, Joana Correia, Rosie Jones, Jussi Karlgren, Spotify
- Ian Soboroff, National Institute of Standards and Technology, United States
Contact
audio-chi@googlegroups.com