Journal
ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY
Volume 12, Issue 3
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3450946
Keywords
Knowledge discovery; deep learning; self-supervised learning
The study introduces three variations of vector-quantization-based topic models and demonstrates their superior performance compared to baseline models through empirical research. Furthermore, a case study highlights the powerful capabilities of these models in downstream tasks.
With the purpose of learning and utilizing explicit and dense topic embeddings, we propose three variations of novel vector-quantization-based topic models (VQ-TMs): (1) Hard VQ-TM, (2) Soft VQ-TM, and (3) Multi-View Soft VQ-TM. The model family capitalizes on vector quantization techniques, embedded input documents, and a view of words as mixtures of topics. Guided by a comprehensive set of evaluation metrics, we conduct systematic quantitative and qualitative empirical studies, and demonstrate the superior performance of VQ-TMs compared to important baseline models. Through a unique case study on code generation from natural language descriptions, we further illustrate the power of VQ-TMs in downstream tasks.
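The abstract distinguishes hard from soft vector quantization. As a rough illustration only (the paper's actual models, architectures, and parameter names are not given here), the core quantization step can be sketched as follows: hard VQ snaps a document embedding to its single nearest codebook (topic) vector, while soft VQ replaces it with a distance-weighted mixture over all topic vectors. The function names, the Euclidean distance, and the temperature parameter are illustrative assumptions, not details from the paper.

```python
import numpy as np

def hard_vq(doc_embeddings, codebook):
    """Hard VQ sketch: map each embedding to its nearest topic vector."""
    # Pairwise squared Euclidean distances, shape (n_docs, n_topics).
    dists = ((doc_embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    ids = dists.argmin(axis=1)        # one topic index per document
    quantized = codebook[ids]         # embedding replaced by its topic vector
    return ids, quantized

def soft_vq(doc_embeddings, codebook, temperature=1.0):
    """Soft VQ sketch: a distance-based softmax mixture over topic vectors."""
    dists = ((doc_embeddings[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    logits = -dists / temperature
    # Numerically stable softmax over the topic axis.
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    quantized = probs @ codebook      # convex combination of topic vectors
    return probs, quantized
```

Lowering the temperature sharpens the soft assignment toward the hard one; this is one common way such schemes interpolate between the two regimes.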
Authors