Article

Physics-Based Deep Learning for Fiber-Optic Communication Systems

Journal

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
Volume 39, Issue 1, Pages 280-294

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JSAC.2020.3036950

Keywords

Artificial neural networks; Neural networks; Supervised learning; Mathematical model; Complexity theory; Training; Deep learning; Deep neural networks; digital backpropagation; machine learning; nonlinear equalization; nonlinear interference mitigation; physics-based deep learning; split-step method

Funding

  1. European Union, Marie Curie Actions (MSCA) [749798]
  2. National Science Foundation (NSF), Directorate for Engineering, Division of Electrical, Communications & Cyber Systems [1609327]

Abstract

A new physics-based machine-learning model is proposed for solving problems in fiber-optic communication systems, efficiently inverting the nonlinear Schrödinger equation. By progressively pruning filter taps, complexity can be reduced significantly.
We propose a new machine-learning approach for fiber-optic communication systems whose signal propagation is governed by the nonlinear Schrödinger equation (NLSE). Our main observation is that the popular split-step method (SSM) for numerically solving the NLSE has essentially the same functional form as a deep multi-layer neural network; in both cases, one alternates linear steps and pointwise nonlinearities. We exploit this connection by parameterizing the SSM and viewing the linear steps as general linear functions, similar to the weight matrices in a neural network. The resulting physics-based machine-learning model has several advantages over black-box function approximators. For example, it allows us to examine and interpret the learned solutions in order to understand why they perform well. As an application, low-complexity nonlinear equalization is considered, where the task is to efficiently invert the NLSE. This is commonly referred to as digital backpropagation (DBP). Rather than employing neural networks, the proposed algorithm, dubbed learned DBP (LDBP), uses the physics-based model with trainable filters in each step; its complexity is reduced by progressively pruning filter taps during gradient descent. Our main finding is that the filters can be pruned to remarkably short lengths, as few as 3 taps per step, without sacrificing performance. As a result, the complexity can be reduced by orders of magnitude in comparison to prior work. By inspecting the filter responses, an additional theoretical justification for the learned parameter configurations is provided. Our work illustrates that combining data-driven optimization with existing domain knowledge can generate new insights into old communications problems.
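The SSM ↔ neural-network correspondence described in the abstract can be sketched in a few lines: each step applies a linear (dispersion) operation followed by a pointwise nonlinear phase rotation, and DBP undoes these steps in reverse order. The sketch below is illustrative only; all function names, parameter values, and the magnitude-based pruning helper are assumptions for exposition, not the paper's implementation (which uses trainable per-step FIR filters optimized by gradient descent).

```python
import numpy as np

def ssm_propagate(u, n_steps, dz, beta2, gamma, dt):
    """Forward NLSE propagation via the split-step method (SSM).

    Structurally a deep network: each step is a linear operation
    (all-pass dispersion filter) followed by a pointwise nonlinearity
    (Kerr phase rotation). Parameters are illustrative.
    """
    omega = 2 * np.pi * np.fft.fftfreq(len(u), d=dt)
    H = np.exp(-1j * (beta2 / 2) * omega**2 * dz)  # dispersion "weight" layer
    for _ in range(n_steps):
        u = np.fft.ifft(H * np.fft.fft(u))               # linear step
        u = u * np.exp(1j * gamma * np.abs(u)**2 * dz)   # pointwise nonlinearity
    return u

def dbp(u, n_steps, dz, beta2, gamma, dt):
    """Digital backpropagation: undo each SSM step in reverse order.

    The phase rotation preserves |u|, so it is inverted exactly by the
    conjugate rotation; the dispersion filter is inverted by its conjugate.
    """
    omega = 2 * np.pi * np.fft.fftfreq(len(u), d=dt)
    H_inv = np.exp(+1j * (beta2 / 2) * omega**2 * dz)
    for _ in range(n_steps):
        u = u * np.exp(-1j * gamma * np.abs(u)**2 * dz)  # invert nonlinearity
        u = np.fft.ifft(H_inv * np.fft.fft(u))           # invert dispersion
    return u

def prune_taps(h, keep):
    """Magnitude-based tap pruning (schematic LDBP ingredient):
    zero all but the `keep` largest-magnitude filter taps."""
    h = h.copy()
    h[np.argsort(np.abs(h))[:-keep]] = 0.0
    return h
```

In LDBP, the fixed frequency-domain filter `H` would be replaced by a short trainable time-domain FIR filter per step, with `prune_taps`-style pruning applied progressively during training to shrink the filters.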
