
Parallel and Distributed Machine Learning

Published May 15, 2024 (DOI: https://doi.org/10.54985/peeref.2405p1620685)

Not peer reviewed

Authors

Kashish Agarwal1 , Aditya Bhat1 , Atharva Deshpande1 , Jayesh Bhave1 , Fatima Inamdar1
  1. Vishwakarma Institute of Information Technology

Conference/Event

IJSAE, December 2023 (virtual event)

Poster Abstract

This poster outlines the difficulties, advantages, and distinctions between distributed and parallel machine learning (ML). It breaks down the challenges of synchronization, fault tolerance, and communication overhead that arise when parallelizing and distributing ML workloads. It also highlights the benefits these methods provide, such as improved scalability, faster training, and the capacity to handle large datasets. Additionally, the poster clarifies the difference between the two approaches: distribution coordinates several machines over a network, whereas parallelism uses multiple resources on a single machine. Through clear illustrations and succinct explanations, the poster aims to give researchers and practitioners a comprehensive grasp of the intricacies of parallel and distributed ML, enabling well-informed decision-making.
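The data-parallel pattern the poster refers to can be illustrated with a minimal sketch: each worker computes a gradient on its own shard of the data, and the per-shard gradients are averaged before the update. This is a hypothetical illustration, not code from the poster; all function names and parameters below are assumptions chosen for the example.

```python
# Minimal sketch of data parallelism (illustrative, not from the poster):
# each worker holds one shard of the data, computes a local gradient,
# and a synchronous step averages the gradients -- the same pattern used
# at larger scale across GPUs or machines.
import numpy as np

def local_gradient(w, X, y):
    """Gradient of mean squared error for a linear model on one shard."""
    residual = X @ w - y
    return 2 * X.T @ residual / len(y)

def data_parallel_step(w, shards, lr=0.1):
    """One synchronous SGD step: average the per-shard gradients."""
    grads = [local_gradient(w, X, y) for X, y in shards]  # parallelizable
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

# Split the dataset into 4 equal shards, one per (simulated) worker.
shards = [(X[i::4], y[i::4]) for i in range(4)]

w = np.zeros(3)
for _ in range(200):
    w = data_parallel_step(w, shards)
print(np.round(w, 2))  # converges toward true_w
```

With equal-size shards, the averaged gradient equals the full-batch gradient, so the sequential and data-parallel versions take identical steps; in a real distributed setting the averaging is the communication step (an all-reduce) whose overhead the poster discusses.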

Keywords

Parallel, Distributed, Data parallelism, Model parallelism, High Performance Computing

Research Areas

Computer and Information Science, Systems Science


Funding

No data available

Supplementary Materials

No data available

Additional Information

Competing Interests
No competing interests were disclosed.
Data Availability Statement
Data sharing not applicable to this poster as no datasets were generated or analyzed during the current study.
Creative Commons License
Copyright © 2024 Agarwal et al. This is an open access work distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Citation
Agarwal, K., Bhat, A., Deshpande, A., Bhave, J., Inamdar, F. Parallel and Distributed Machine Learning [not peer reviewed]. Peeref 2024 (poster).
