HCMSL: Hybrid Cross-modal Similarity Learning for Cross-modal Retrieval
Published 2021
Journal
ACM Transactions on Multimedia Computing, Communications, and Applications
Volume 17, Issue 1s, Pages 1-22
Publisher
Association for Computing Machinery (ACM)
Online
2021-04-27
DOI
10.1145/3412847
Related references
Note: only part of the references are listed.
- Cryptanalysis and enhancement of an image encryption scheme based on a 1-D coupled Sine map
- (2020) Yu Liu et al. Nonlinear Dynamics
- Deep Coattention-Based Comparator for Relative Representation Learning in Person Re-Identification
- (2020) Lin Wu et al. IEEE Transactions on Neural Networks and Learning Systems
- A novel image encryption algorithm based on bit-plane matrix rotation and hyper chaotic systems
- (2019) Cong Xu et al. Multimedia Tools and Applications
- Cross-modal recipe retrieval via parallel- and cross-attention networks learning
- (2019) Da Cao et al. Knowledge-Based Systems
- Triplet-Based Deep Hashing Network for Cross-Modal Retrieval
- (2018) Cheng Deng et al. IEEE Transactions on Image Processing
- CCL: Cross-modal Correlation Learning With Multigrained Fusion by Hierarchical Network
- (2018) Yuxin Peng et al. IEEE Transactions on Multimedia
- Multiview Spectral Clustering via Structured Low-Rank Matrix Factorization
- (2018) Yang Wang et al. IEEE Transactions on Neural Networks and Learning Systems
- Deep Binary Reconstruction for Cross-modal Hashing
- (2018) Di Hu et al. IEEE Transactions on Multimedia
- Cycle-Consistent Deep Generative Hashing for Cross-Modal Retrieval
- (2018) Lin Wu et al. IEEE Transactions on Image Processing
- Where-and-When to Look: Deep Siamese Attention Networks for Video-Based Person Re-Identification
- (2018) Lin Wu et al. IEEE Transactions on Multimedia
- SCH-GAN: Semi-Supervised Cross-Modal Hashing by Generative Adversarial Network
- (2018) Jian Zhang et al. IEEE Transactions on Cybernetics
- MHTN: Modal-Adversarial Hybrid Transfer Network for Cross-Modal Retrieval
- (2018) Xin Huang et al. IEEE Transactions on Cybernetics
- Effective Multi-Query Expansions: Collaborative Deep Networks for Robust Landmark Retrieval
- (2017) Yang Wang et al. IEEE Transactions on Image Processing
- Semi-supervised semantic factorization hashing for fast cross-modal retrieval
- (2017) Jiale Wang et al. Multimedia Tools and Applications
- Cross-View Retrieval via Probability-Based Semantics-Preserving Hashing
- (2017) Zijia Lin et al. IEEE Transactions on Cybernetics
- Semantic Boosting Cross-Modal Hashing for efficient multimedia retrieval
- (2016) Ke Wang et al. Information Sciences
- Robust hashing for image authentication using SIFT feature and quaternion Zernike moments
- (2016) Junlin Ouyang et al. Multimedia Tools and Applications
- Robust Image Hashing Using Radon Transform and Invariant Features
- (2016) Y. L. Liu et al. Radioengineering
- Learning Compact Hash Codes for Multimodal Representations Using Orthogonal Deep Structure
- (2015) Daixin Wang et al. IEEE Transactions on Multimedia
- Semi-supervised multi-graph hashing for scalable similarity search
- (2014) Jian Cheng et al. Computer Vision and Image Understanding
- Learning Cross-Media Joint Representation With Sparse and Semisupervised Regularization
- (2013) Xiaohua Zhai et al. IEEE Transactions on Circuits and Systems for Video Technology
- Sparse Multi-Modal Hashing
- (2013) Fei Wu et al. IEEE Transactions on Multimedia
- On the Role of Correlation and Abstraction in Cross-Modal Multimedia Retrieval
- (2013) Jose Costa Pereira et al. IEEE Transactions on Pattern Analysis and Machine Intelligence