Article

Similarity Assessment Model for Chinese Sign Language Videos

Journal

IEEE TRANSACTIONS ON MULTIMEDIA
Volume 16, Issue 3, Pages 751-761

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TMM.2014.2298382

Keywords

Chinese sign language video; human visual system (HVS); sign language semantic; video similarity assessment

Funding

  1. Natural Science Foundation of China [61227004, 61170104, 61370119, 61033004, U0935004, 61133003]
  2. Beijing Natural Science Foundation [4112008]

Abstract

This paper proposes a model for measuring the similarity between videos whose content is Chinese Sign Language (CSL); both vision and sign language semantics are considered in the model. The vision component of the model is a distance based on Volume Local Binary Patterns (VLBP), which is robust to motion and illumination changes. The semantic component computes a semantic distance based on the definition of sign language semantics, which comprises hand shape, location, orientation, and movement. When quantizing the sign language semantics, contours are used to measure shape and orientation, while trajectories are used to measure location and movement. Experimental results show that the proposed assessment model is effective and that the scores it produces are close to subjective scores.
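To make the vision component of the abstract concrete, the sketch below computes a simplified volume local binary pattern histogram for two clips and compares them with a chi-square distance. This is a minimal illustration under stated assumptions, not the paper's exact VLBP formulation: the clip shape (T, H, W), the 8-neighbour spatio-temporal sampling, and the function names are choices made here for demonstration, and the semantic component (hand shape, location, orientation, movement) is not modelled.

```python
# Minimal sketch of a VLBP-style vision distance between two sign-language
# video clips, assuming each clip is a grayscale NumPy array of shape (T, H, W).
# Illustrative simplification only; not the paper's exact VLBP or its
# semantic (shape/location/orientation/movement) component.
import numpy as np

def vlbp_histogram(video, bins=256):
    """Histogram of simplified volume local binary patterns.

    For each interior pixel, 8 spatio-temporal neighbours (4 in the same
    frame, 2 in the previous frame, 2 in the next frame) are thresholded
    against the centre pixel to form an 8-bit code.
    """
    v = video.astype(np.float32)
    c = v[1:-1, 1:-1, 1:-1]                      # centre pixels
    neighbours = [
        v[1:-1, :-2, 1:-1], v[1:-1, 2:, 1:-1],   # up / down (same frame)
        v[1:-1, 1:-1, :-2], v[1:-1, 1:-1, 2:],   # left / right (same frame)
        v[:-2, 1:-1, :-2],  v[:-2, 1:-1, 2:],    # previous frame
        v[2:,  1:-1, :-2],  v[2:,  1:-1, 2:],    # next frame
    ]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)             # normalised histogram

def vision_distance(video_a, video_b):
    """Chi-square distance between the two clips' VLBP histograms."""
    ha, hb = vlbp_histogram(video_a), vlbp_histogram(video_b)
    return 0.5 * np.sum((ha - hb) ** 2 / (ha + hb + 1e-10))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clip_a = rng.integers(0, 256, size=(30, 64, 64))   # stand-in clips
    clip_b = rng.integers(0, 256, size=(30, 64, 64))
    print("VLBP vision distance:", vision_distance(clip_a, clip_b))
```

In the paper's full model this vision distance would be combined with the semantic distance computed from contours (shape, orientation) and trajectories (location, movement); the combination itself is not reproduced here.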
