4.6 Article

Incorporating Similarity Measures to Optimize Graph Convolutional Neural Networks for Product Recommendation

Journal

APPLIED SCIENCES-BASEL
Volume 11, Issue 4

Publisher

MDPI
DOI: 10.3390/app11041366

Keywords

recommendations; probability distribution; graph convolutional neural networks; product recommendation system; Kullback–Leibler (KL) divergence; deep learning

Funding

  1. Ministry of Small and Medium-sized Enterprises (SMEs) and Startups (MSS), Korea [S2855401]
  2. Ministry of Health & Welfare (MOHW), Republic of Korea [S2855401] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Abstract

This paper proposes a Graph Convolutional Neural Network (GCNN)-based approach for online product recommendation that addresses the challenges of handling computational complexity and training on large datasets.
With the ever-growing amount of online data and information, recommender systems have become an overwhelmingly popular and effective approach to overcoming the challenge of information overload. Artificial Intelligence (AI) and Deep Learning (DL) have attracted significant interest in many research areas, and recommender systems are one of them. In this paper, a Graph Convolutional Neural Network (GCNN)-based approach is used for online product recommendation. Graph-based methods have received substantial attention for several recommendation tasks, with effective results. However, handling the computational complexity and training on large datasets remain challenges for such models. Although useful, the sheer number of model parameters greatly obstructs their application in real-world recommender frameworks. The recursive generation of neighbor-node embeddings for each node in the graph makes it challenging to train a deep, large GCNN model. Therefore, we propose a model that incorporates measures of similarity between nodes; these similarity measures allow the neighbors to be sampled beforehand. We estimate similarity from each node's probability distribution of interactions with other nodes and use Kullback–Leibler (KL) divergence to measure the distance between these distributions. On this basis, we set a threshold criterion for neighbor selection and generate node clusters. These clusters are then converted to subgraphs that serve as input to the proposed GCNN model. This approach simplifies neighbor sampling for the GCNN and hence yields a significant improvement in the model's computational complexity. Finally, we compared the results with those of the previously proposed OpGCN model, the basic GCNN model, and traditional approaches such as collaborative filtering and probabilistic matrix factorization. The experiments showed that estimating node similarity and sampling neighbors before training reduced both the complexity and the computational time.
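
To make the neighbor pre-sampling step concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' implementation: it assumes a dense node-by-node interaction-count matrix, normalizes each row into a probability distribution, compares rows with KL divergence, and keeps as neighbors only the nodes whose divergence falls below an illustrative threshold. The function names and the threshold value are assumptions made for demonstration.

import numpy as np

def interaction_distributions(interactions, eps=1e-12):
    # Normalize each row of a (nodes x nodes) interaction-count matrix
    # into a probability distribution; eps avoids zero probabilities.
    smoothed = interactions + eps
    return smoothed / smoothed.sum(axis=1, keepdims=True)

def kl_divergence(p, q):
    # Discrete KL divergence D_KL(p || q) between two probability vectors.
    return float(np.sum(p * np.log(p / q)))

def sample_neighbors(interactions, threshold=0.3):
    # For each node, keep the nodes whose interaction distribution lies
    # within the given KL-divergence threshold; the resulting neighbor
    # sets can then be turned into subgraphs for the GCNN.
    probs = interaction_distributions(interactions)
    n = probs.shape[0]
    return {
        i: [j for j in range(n)
            if j != i and kl_divergence(probs[i], probs[j]) < threshold]
        for i in range(n)
    }

# Toy usage with random interaction counts for four nodes.
rng = np.random.default_rng(seed=0)
counts = rng.integers(1, 10, size=(4, 4)).astype(float)
print(sample_neighbors(counts, threshold=0.3))

Note that KL divergence is asymmetric; the paper does not state whether a symmetrized form is used, so the one-directional comparison here is a simplifying assumption.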
