Article

Incorporating Similarity Measures to Optimize Graph Convolutional Neural Networks for Product Recommendation

Journal

APPLIED SCIENCES-BASEL
Volume 11, Issue 4

Publisher

MDPI
DOI: 10.3390/app11041366

Keywords

recommendations; probability distribution; graph convolutional neural networks; product recommendation system; Kullback-Leibler (KL) divergence; deep learning

Funding

  1. Ministry of Small and Medium-sized Enterprises (SMEs) and Startups (MSS), Korea [S2855401]
  2. Ministry of Health & Welfare (MOHW), Republic of Korea [S2855401] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Abstract

This paper proposes a Graph Convolutional Neural Network (GCNN)-based approach for online product recommendation that addresses the challenges of computational complexity and of training on large datasets.
With the ever-growing amount of online data and information, recommender systems have become a popular and effective approach for overcoming the challenge of information overload. Artificial Intelligence (AI) and Deep Learning (DL) have attracted significant interest in many research areas, and recommender systems are one of them. In this paper, a Graph Convolutional Neural Network (GCNN)-based approach is used for online product recommendation. Graph-based methods have received substantial attention for several recommendation tasks, with effective results. However, handling the computational complexity and training on large datasets remain a challenge for such models. Although they are useful, the excessive number of model parameters largely hinders their application in real-world recommender systems. The recursive way of generating neighbor-node embeddings for each node in the graph makes it challenging to train a deep and large GCNN model. Therefore, we propose a model that incorporates measures of similarity between two different nodes, and these similarity measures allow us to sample the neighbors beforehand. We estimate the similarity between nodes based on their interaction probability distributions with other nodes, using the Kullback-Leibler (KL) divergence to measure the distance between the distributions. In this way, we set a threshold criterion for neighbor selection and generate clusters of similar nodes. These clusters are then converted to subgraphs and used as input for the proposed GCNN model. This approach simplifies the task of neighbor sampling for the GCNN, and we therefore observe a significant improvement in the computational complexity of the GCNN model. Finally, we compared the results with those of the previously proposed OpGCN model, the basic GCNN model, and other traditional approaches such as collaborative filtering and probabilistic matrix factorization. The experiments showed that the complexity and computational time were decreased by estimating the similarity among nodes and sampling the neighbors before training.
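To make the sampling step described above concrete, the following Python sketch (a minimal illustration, not the authors' implementation) derives each node's interaction probability distribution from an interaction-count adjacency matrix, compares nodes with a symmetrized KL divergence, thresholds the distances to form clusters, and extracts the induced subgraphs that would serve as GCNN input. The function names, the smoothing constant, the default threshold, and the choice of a symmetrized KL variant are all assumptions made for illustration; the paper does not specify these details.

    import numpy as np

    def interaction_distribution(adj_row, eps=1e-10):
        # Normalize a node's interaction counts into a probability
        # distribution; eps-smoothing keeps KL divergence finite.
        row = adj_row.astype(float) + eps
        return row / row.sum()

    def kl_divergence(p, q):
        # D_KL(p || q) for discrete distributions p and q.
        return float(np.sum(p * np.log(p / q)))

    def sample_neighbors(adj, threshold=0.5):
        # Cluster each node with the nodes whose interaction
        # distributions lie within `threshold` of its own.
        # A symmetrized KL is used because D_KL is asymmetric
        # (an assumption; the paper does not state which variant it uses).
        n = adj.shape[0]
        dists = [interaction_distribution(adj[i]) for i in range(n)]
        clusters = []
        for i in range(n):
            members = [j for j in range(n) if j != i and
                       0.5 * (kl_divergence(dists[i], dists[j]) +
                              kl_divergence(dists[j], dists[i])) < threshold]
            clusters.append([i] + members)
        return clusters

    def to_subgraph(adj, cluster):
        # Induced subgraph adjacency for one cluster; these subgraphs
        # are what the GCNN would consume as input.
        idx = np.array(cluster)
        return adj[np.ix_(idx, idx)]

    # Toy usage: four items with pairwise interaction counts.
    adj = np.array([[0, 3, 1, 0],
                    [3, 0, 2, 0],
                    [1, 2, 0, 4],
                    [0, 0, 4, 0]], dtype=float)
    subgraphs = [to_subgraph(adj, c) for c in sample_neighbors(adj)]

Precomputing the clusters once in this way means the GCNN no longer has to expand neighborhoods recursively during training, which is the source of the reduction in complexity and computational time that the abstract reports.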
