Article

Photonic Multiply-Accumulate Operations for Neural Networks

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSTQE.2019.2941485

Keywords

Photonics; Neural networks; Program processors; Computational modeling; Deep learning; Training; Metals; Artificial intelligence; neural networks; analog computers; analog processing circuits; optical computing

Funding

  1. National Science Foundation (NSF) [ECCS 1247298, DGE 1148900]


It has long been known that photonic communication can alleviate the data-movement bottlenecks that plague conventional microelectronic processors. More recently, there has also been interest in its capability to implement low-precision linear operations, such as matrix multiplications, quickly and efficiently. We characterize the performance of photonic and electronic hardware underlying neural network models using multiply-accumulate (MAC) operations. First, we investigate the limits of analog electronic crossbar arrays and on-chip photonic linear computing systems. Photonic processors are shown to have advantages in the limit of large processor sizes (>100 μm), large vector sizes (N > 500), and low noise precision (≤4 bits). We discuss several proposed tunable photonic MAC systems, and provide a concrete comparison between deep learning and photonic hardware using several empirically validated device and system models. We show significant potential improvements over digital electronics in energy (>10^2), speed (>10^3), and compute density (>10^2).
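The abstract compares analog MAC hardware at low noise-limited precision (≤4 bits). The sketch below is not from the paper; it is a minimal, hypothetical model of a single analog multiply-accumulate: the dot product w·x is formed in one analog step, perturbed by read-out noise, then quantized to b bits. The function name, noise model, and full-scale range of [-1, 1] are all illustrative assumptions.

```python
import random

def analog_mac(weights, inputs, bits=4, noise_sigma=0.01):
    """Toy model (illustrative, not the paper's method) of one analog
    multiply-accumulate: w . x computed in a single shot, read out
    through additive Gaussian noise and a b-bit quantizer."""
    acc = sum(w * x for w, x in zip(weights, inputs))
    acc += random.gauss(0.0, noise_sigma)   # analog read-out noise (assumed Gaussian)
    levels = 2 ** bits - 1                  # b-bit quantizer over full scale [-1, 1]
    q = round((acc + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0
    return max(-1.0, min(1.0, q))           # clip to full-scale range
```

With `noise_sigma=0.0` the function reduces to pure b-bit quantization of the dot product, which makes the precision limit visible: at `bits=4` there are only 15 distinguishable output levels, matching the ≤4-bit regime where the abstract argues photonic processors are advantageous.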

