Mutual-learning sequence-level knowledge distillation for automatic speech recognition

Title
Mutual-learning sequence-level knowledge distillation for automatic speech recognition
Authors
Keywords
Automatic speech recognition (ASR), Model compression, Knowledge distillation (KD), Mutual learning, Connectionist temporal classification (CTC)
Publication
NEUROCOMPUTING
Volume 428, Issue -, Pages 259-267
Publisher
Elsevier BV
Publication date
2020-12-12
DOI
10.1016/j.neucom.2020.11.025
