Article

A multiscale finite volume method for Maxwell's equations at low frequencies

Journal

GEOPHYSICAL JOURNAL INTERNATIONAL
Volume 199, Issue 2, Pages 1268-1277

Publisher

Oxford University Press
DOI: 10.1093/gji/ggu268

Keywords

Numerical solutions; Numerical approximations and analysis; Electromagnetic theory

Simulating electromagnetic fields in the quasi-static regime by solving Maxwell's equations is a central task in many geophysical applications. In most cases, geophysical targets of interest exhibit complex topography and bathymetry as well as layers and faults. Capturing these effects with a sufficient level of detail is a major challenge for numerical simulations. Standard techniques require a very fine discretization that can result in an impracticably large linear system. A remedy is to use locally refined and adaptive meshes; however, the potential for coarsening is limited in the presence of highly heterogeneous and anisotropic conductivities. In this paper, we discuss the application of multiscale finite volume (MSFV) methods to Maxwell's equations in the frequency domain. Given a partition of the fine mesh into a coarse mesh, the idea is to obtain a coarse-to-fine interpolation by solving local versions of Maxwell's equations on each coarse grid cell. By construction, the interpolation accounts for fine-scale conductivity changes, yields a natural homogenization, and dramatically reduces the size of the fine mesh problem. To improve the accuracy for singular sources, we use an irregular coarsening strategy. We show that MSFV methods can simulate electromagnetic fields with reasonable accuracy in a fraction of the time required by state-of-the-art solvers for the fine mesh problem, especially on parallel platforms.
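
To make the coarse-to-fine interpolation concrete, here is a minimal sketch on a 1-D scalar surrogate: a finite volume discretization of -(sigma u')' stands in for the vector Maxwell system, and the multiscale basis for each coarse node comes from local homogeneous solves on the two adjacent coarse cells, in the spirit of the construction described above. All names, sizes, and the model problem itself are illustrative assumptions, not the paper's implementation.

```python
# Minimal 1-D sketch of multiscale coarse-to-fine interpolation
# (illustrative surrogate, not the paper's Maxwell code).
import numpy as np

def fine_operator(sigma, h):
    # Finite-volume matrix for -d/dx(sigma du/dx) with homogeneous
    # Dirichlet ends; sigma holds one conductivity value per fine cell.
    n = len(sigma) - 1                       # number of interior nodes
    A = np.zeros((n, n))
    for i in range(n):                       # unknown i lives at node i+1
        A[i, i] = (sigma[i] + sigma[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -sigma[i] / h**2
        if i < n - 1:
            A[i, i + 1] = -sigma[i + 1] / h**2
    return A

def basis_ramp(sigma_loc, h):
    # Local homogeneous solve on one coarse cell with boundary values
    # u = 0 (left) and u = 1 (right); returns all local node values.
    rhs = np.zeros(len(sigma_loc) - 1)
    rhs[-1] = sigma_loc[-1] / h**2           # u = 1 boundary term
    u = np.linalg.solve(fine_operator(sigma_loc, h), rhs)
    return np.concatenate(([0.0], u, [1.0]))

def prolongation(sigma, h, m):
    # One multiscale basis function per interior coarse node (every m-th
    # fine node), glued together from two local solves.
    N = len(sigma)
    coarse = np.arange(m, N, m)
    P = np.zeros((N - 1, len(coarse)))
    for j, c in enumerate(coarse):
        up = basis_ramp(sigma[c - m:c], h)           # rises 0 -> 1 at node c
        down = 1.0 - basis_ramp(sigma[c:c + m], h)   # falls 1 -> 0 after c
        for k, v in zip(range(c - m, c + 1), up):
            if 0 < k < N:
                P[k - 1, j] = v
        for k, v in zip(range(c, c + m + 1), down):
            if 0 < k < N:
                P[k - 1, j] = v
    return P

# Demo: strongly heterogeneous conductivity, fine solve vs. coarse solve.
rng = np.random.default_rng(0)
N, m, h = 64, 8, 1.0 / 64
sigma = 10.0 ** rng.uniform(-2, 2, N)        # four decades of contrast
A, q = fine_operator(sigma, h), np.ones(N - 1)
P = prolongation(sigma, h, m)
u_ms = P @ np.linalg.solve(P.T @ A @ P, P.T @ q)   # Galerkin coarse solve
u_fd = np.linalg.solve(A, q)
print("relative error:", np.linalg.norm(u_ms - u_fd) / np.linalg.norm(u_fd))
```

Because each basis function is computed with the true local sigma, conductivity jumps are built into the interpolation; this is the natural homogenization the abstract refers to, and it is what allows aggressive coarsening where a purely geometric interpolation would fail.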

Recommended

Article Computer Science, Artificial Intelligence

Deep Neural Networks Motivated by Partial Differential Equations

Lars Ruthotto, Eldad Haber

JOURNAL OF MATHEMATICAL IMAGING AND VISION (2020)

Article Multidisciplinary Sciences

A machine learning framework for solving high-dimensional mean field game and mean field control problems

Lars Ruthotto, Stanley J. Osher, Wuchen Li, Levon Nurbekyan, Samy Wu Fung

PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA (2020)

Article Engineering, Electrical & Electronic

LeanConvNets: Low-Cost Yet Effective Convolutional Neural Networks

Jonathan Ephrath, Moshe Eliasof, Lars Ruthotto, Eldad Haber, Eran Treister

IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING (2020)

Article Engineering, Multidisciplinary

Inversion of convection-diffusion equation with discrete sources

Meenarli Sharma, Mirko Hahn, Sven Leyffer, Lars Ruthotto, Bart van Bloemen Waanders

Summary: This study introduces a convection-diffusion inverse problem that aims to identify an unknown number of sources and their locations. The problem can be formulated as a large-scale mixed-integer nonlinear optimization problem that current state-of-the-art solvers cannot handle. Two new rounding heuristics are therefore developed, along with a steepest-descent improvement heuristic, to obtain satisfactory solutions for both two- and three-dimensional inverse problems. The code used in the numerical experiments is provided in open-source form. A generic sketch of the rounding-plus-improvement pattern follows the citation below.

OPTIMIZATION AND ENGINEERING (2021)
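
As a generic illustration of that pattern (not the paper's actual heuristics), the toy below relaxes binary source indicators to [0, 1], solves the relaxation with projected gradient descent, rounds greedily in order of the relaxed weights, and finishes with a single-flip descent. The forward operator and all sizes are invented for the example.

```python
# Hypothetical toy: recover binary source indicators z from data d = A z*.
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 60
A = rng.normal(size=(m, n))                 # stand-in forward operator
z_true = (rng.random(n) < 0.1).astype(float)
d = A @ z_true

def misfit(z):
    return np.linalg.norm(A @ z - d) ** 2

# Step 1: continuous relaxation, z in [0,1]^n, via projected gradient.
z = np.full(n, 0.5)
for _ in range(500):
    z = np.clip(z - 1e-3 * 2 * A.T @ (A @ z - d), 0.0, 1.0)

# Step 2: greedy rounding; activate sources in order of relaxed weight
# as long as the misfit keeps decreasing.
order = np.argsort(-z)
zr = np.zeros(n)
best = misfit(zr)
for i in order:
    trial = zr.copy(); trial[i] = 1.0
    if misfit(trial) < best:
        zr, best = trial, misfit(trial)

# Step 3: improvement; single-flip descent until no flip helps.
improved = True
while improved:
    improved = False
    for i in range(n):
        trial = zr.copy(); trial[i] = 1.0 - trial[i]
        if misfit(trial) < best:
            zr, best = trial, misfit(trial); improved = True

print("recovered support:", np.flatnonzero(zr), "true:", np.flatnonzero(z_true))
```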

Editorial Material Mathematics, Applied

Connections between deep learning and partial differential equations

M. Burger, W. E, L. Ruthotto, S. J. Osher

EUROPEAN JOURNAL OF APPLIED MATHEMATICS (2021)

Article Mathematics, Applied

PNKH-B: A PROJECTED NEWTON-KRYLOV METHOD FOR LARGE-SCALE BOUND-CONSTRAINED OPTIMIZATION

Kelvin Kan, Samy Wu Fung, Lars Ruthotto

Summary: PNKH-B is a projected Newton-Krylov method designed for solving large-scale optimization problems with bound constraints, particularly in scenarios where function and gradient evaluations are expensive and the (approximate) Hessian is available only through matrix-vector products. By using a low-rank approximation of the Hessian to determine the search direction and construct the metric, PNKH-B achieves fast convergence, especially in the initial iterations. A stripped-down sketch of this idea follows the citation below.

SIAM JOURNAL ON SCIENTIFIC COMPUTING (2021)
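
The sketch below works a toy bound-constrained quadratic in that spirit: the Hessian enters only through matrix-vector products, a few Lanczos steps build a low-rank model for the search direction, and a projected backtracking line search enforces the bounds. Problem, step rules, and parameter values are all illustrative, not the published algorithm.

```python
import numpy as np

def lanczos(hess_mv, g, r):
    # r Lanczos steps on the Hessian, started from the gradient.
    # (No reorthogonalization; fine for a short sketch.)
    n = len(g)
    V = np.zeros((n, r)); alpha = np.zeros(r); beta = np.zeros(r)
    v, v_prev = g / np.linalg.norm(g), np.zeros(n)
    for j in range(r):
        V[:, j] = v
        w = hess_mv(v) - (beta[j - 1] * v_prev if j > 0 else 0.0)
        alpha[j] = v @ w
        w -= alpha[j] * v
        beta[j] = np.linalg.norm(w)
        v_prev, v = v, w / (beta[j] + 1e-16)
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return V, T

def projected_newton_krylov(grad, hess_mv, x, lo, hi, iters=30, r=10):
    for _ in range(iters):
        g = grad(x)
        V, T = lanczos(hess_mv, g, r)
        # Newton direction restricted to the Krylov subspace.
        p = -V @ np.linalg.solve(T, V.T @ g)
        # Projected backtracking line search on the bounds.
        t = 1.0
        while t > 1e-10:
            x_new = np.clip(x + t * p, lo, hi)
            if g @ (x_new - x) < 0:      # still descends after clipping
                break
            t *= 0.5
        x = x_new
    return x

# Toy problem: min 0.5 x'Qx - b'x  subject to  0 <= x <= 1.
rng = np.random.default_rng(2)
n = 100
M = rng.normal(size=(n, n)); Q = M @ M.T + n * np.eye(n)
b = rng.normal(size=n)
x = projected_newton_krylov(lambda x: Q @ x - b, lambda v: Q @ v,
                            np.full(n, 0.5), 0.0, 1.0)
print("projected gradient norm:",
      np.linalg.norm(x - np.clip(x - (Q @ x - b), 0.0, 1.0)))
```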

Article Mathematics, Applied

SLIMTRAIN-A STOCHASTIC APPROXIMATION METHOD FOR TRAINING SEPARABLE DEEP NEURAL NETWORKS

Elizabeth Newman, Julianne Chung, Matthias Chung, Lars Ruthotto

Summary: Deep neural networks (DNNs) have been successful in many applications, but training them can be challenging due to nonconvexity, nonsmoothness, inadequate regularization, and complex data distributions. slimTrain addresses these challenges by exploiting separability in DNN architectures, reducing sensitivity to hyperparameter choices and achieving fast initial convergence. Numerical experiments demonstrate the superior performance of slimTrain in function approximation tasks, outperforming existing DNN training methods. A toy example of the separability it exploits follows the citation below.

SIAM JOURNAL ON SCIENTIFIC COMPUTING (2022)
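
That separability can be seen in a two-layer toy model f(x) = W tanh(Kx): for fixed K, the optimal W solves a linear least-squares problem, so each stochastic step can pair an exact regularized solve for W with a plain SGD step on K. The sketch below shows only this pattern; the paper's slimTik update, adaptive regularization, and memory terms are omitted, and all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_hid, d_out, n = 5, 20, 3, 2000
K_true = rng.normal(size=(d_hid, d_in))
W_true = rng.normal(size=(d_out, d_hid))
X = rng.normal(size=(d_in, n))
Y = W_true @ np.tanh(K_true @ X)             # teacher-student data

K = rng.normal(size=(d_hid, d_in)) * 0.5
W = np.zeros((d_out, d_hid))
lr, lam, batch = 1e-2, 1e-3, 100

for epoch in range(50):
    for s in range(0, n, batch):
        xb, yb = X[:, s:s + batch], Y[:, s:s + batch]
        Z = np.tanh(K @ xb)                  # hidden features
        # Linear-in-W subproblem: min_W ||W Z - yb||^2 + lam ||W||^2,
        # solved exactly on the current batch (normal equations).
        W = yb @ Z.T @ np.linalg.inv(Z @ Z.T + lam * np.eye(d_hid))
        # Plain SGD step on the nonlinear weights K.
        R = W @ Z - yb                       # residual, d_out x batch
        G = (W.T @ R) * (1.0 - Z**2)         # backprop through tanh
        K -= lr * (G @ xb.T) / batch
    # (a slimTrain-style method would also damp W across batches)

print("train loss:", np.mean((W @ np.tanh(K @ X) - Y) ** 2))
```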

Article Automation & Control Systems

A Neural Network Approach for High-Dimensional Optimal Control Applied to Multiagent Path Finding

Derek Onken, Levon Nurbekyan, Xingjian Li, Samy Wu Fung, Stanley Osher, Lars Ruthotto

Summary: In this study, a neural network approach is proposed to compute approximate solutions of high-dimensional optimal control problems. The approach combines the Hamilton-Jacobi-Bellman and Pontryagin maximum principle formulations and provides real-time control through feedback. The method is effective for multiagent path finding and offers advantages in time efficiency and scalability in high dimensions. Offline training of the neural network enables fast generation of control policies. The feedback law at the heart of this approach is sketched after the citation below.

IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY (2023)
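
The feedback property has a compact form: for dynamics dx/dt = u and a quadratic control cost, the Pontryagin condition gives u*(x, t) = -grad_x Phi(x, t), so any callable value function yields a real-time controller. The sketch below uses a hand-written toy Phi and finite differences where the paper would use a trained network and automatic differentiation.

```python
import numpy as np

def feedback_control(Phi, x, t, eps=1e-5):
    # Central finite differences stand in for the gradient of the value
    # function; with a neural Phi one would use autodiff instead.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (Phi(x + e, t) - Phi(x - e, t)) / (2.0 * eps)
    return -g

# Closed-loop rollout of dx/dt = u with explicit Euler time stepping.
Phi = lambda x, t: 0.5 * np.sum(x**2) * (1.0 - t)   # toy value function
x, dt = np.array([2.0, -1.0]), 0.05
for k in range(20):
    x = x + dt * feedback_control(Phi, x, k * dt)
print("state after rollout:", x)             # driven toward the origin
```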

Article Mathematics, Applied

MGIC: MULTIGRID-IN-CHANNELS NEURAL NETWORK ARCHITECTURES

Moshe Eliasof, Jonathan Ephrath, Lars Ruthotto, Eran Treister

Summary: In this paper, a multigrid-in-channels (MGIC) approach is presented to address the quadratic growth in the number of parameters of standard convolutional neural networks (CNNs) with respect to the number of channels. The approach replaces each CNN block with an MGIC counterpart that uses nested grouped convolutions to reduce the number of parameters while maintaining coupling of the channels. Experimental results demonstrate the effectiveness of the approach across various architectures, achieving parameter reduction without sacrificing accuracy. The parameter-count arithmetic behind this design is spelled out after the citation below.

SIAM JOURNAL ON SCIENTIFIC COMPUTING (2023)
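
The growth argument reduces to simple arithmetic: a dense k x k convolution between c input and c output channels needs c^2 k^2 weights, while grouped convolutions with a fixed group size s need only (c/s) * s^2 * k^2 = c * s * k^2 weights, linear in c. The snippet below just tabulates those counts; MGIC's multigrid hierarchy over channels, which restores the cross-group coupling that plain grouping loses, is not reproduced here.

```python
# Parameter counts only; no network is constructed. Channel counts and
# kernel sizes are arbitrary examples.
def dense_conv_params(c, k=3):
    # Dense convolution: every input channel talks to every output channel.
    return c * c * k * k

def grouped_conv_params(c, group_size, k=3):
    # Grouped convolution with fixed group size: growth is linear in c.
    groups = c // group_size
    return groups * group_size * group_size * k * k

for c in (64, 256, 1024):
    print(f"c={c:5d}  dense={dense_conv_params(c):10d}"
          f"  grouped(s=8)={grouped_conv_params(c, 8):8d}")
```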

Proceedings Paper Computer Science, Artificial Intelligence

Multivariate Quantile Function Forecaster

Kelvin Kan, Youngsuk Park, Francois-Xavier Aubet, Konstantinos Benidis, Jan Gasthaus, Tim Januschowski, Lars Ruthotto

Summary: We propose a global probabilistic forecasting method called the Multivariate Quantile Function Forecaster (MQF²) and investigate its application to multi-horizon forecasting. MQF² combines the benefits of autoregressive and multi-horizon sequence-to-sequence models, achieving accurate predictions while capturing the time dependency structure.

INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151 (2022)

Proceedings Paper Automation & Control Systems

A Neural Network Approach Applied to Multi-Agent Optimal Control

Derek Onken, Levon Nurbekyan, Xingjian Li, Samy Wu Fung, Stanley Osher, Lars Ruthotto

Summary: The proposed neural network approach combines different control-theoretic formulations, parameterizes the value function with a neural network, efficiently solves multi-agent control problems, and maintains robustness to system disturbances. By training on a distribution of initial states, it ensures optimality of the controls across a large portion of the state space.

2021 EUROPEAN CONTROL CONFERENCE (ECC) (2021)

Article Mathematics, Applied

Train Like a (Var)Pro: Efficient Training of Neural Networks with Variable Projection

Elizabeth Newman, Lars Ruthotto, Joseph Hart, Bart van Bloemen Waanders

Summary: Deep neural networks (DNNs) have achieved state-of-the-art performance across various machine learning tasks by effectively approximating high-dimensional functions. This paper focuses on supervised training of DNNs and proposes the Gauss-Newton VarPro method (GNvpro), which optimizes the weights to accurately approximate the relation between input and target data. Numerical experiments show GNvpro to be more efficient than commonly used stochastic gradient descent (SGD) schemes, providing solutions with good generalization performance. A miniature variable projection example follows the citation below.

SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE (2021)
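
Variable projection itself fits in a few lines. The classic exponential-fitting toy below shows the pattern the summary describes: weights that enter the model linearly are eliminated by an inner least-squares solve, and a Gauss-Newton iteration runs on the reduced residual over the nonlinear parameters only. The finite-difference Jacobian and the fixed damping are conveniences of the sketch, not part of GNvpro.

```python
import numpy as np

t = np.linspace(0.0, 3.0, 200)
y = 2.0 * np.exp(-0.7 * t) - 1.0 * np.exp(-2.5 * t)   # noiseless test signal

def residual(lam):
    # Inner solve: the linear coefficients are eliminated by least squares,
    # leaving a residual that depends only on the nonlinear decay rates.
    Phi = np.exp(-np.outer(t, lam))           # 200 x 2 basis matrix
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return Phi @ c - y

lam, h = np.array([0.3, 3.5]), 1e-6           # initial guess, FD step size
for _ in range(30):
    r = residual(lam)
    # Finite-difference Jacobian of the reduced residual, one column per rate.
    J = np.column_stack([(residual(lam + h * e) - r) / h for e in np.eye(2)])
    # Damped Gauss-Newton step on the reduced problem.
    lam = lam - np.linalg.solve(J.T @ J + 1e-8 * np.eye(2), J.T @ r)

print("recovered decay rates:", np.sort(lam))  # should approach [0.7, 2.5]
```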

Article Mathematics, Applied

ADMM-SOFTMAX: AN ADMM APPROACH FOR MULTINOMIAL LOGISTIC REGRESSION

Samy Wu Fung, Sanna Tyrvainen, Lars Ruthotto, Eldad Haber

ELECTRONIC TRANSACTIONS ON NUMERICAL ANALYSIS (2020)

Article Engineering, Electrical & Electronic

Gauss-Newton Optimization for Phase Recovery From the Bispectrum

James Lincoln Herring, James Nagy, Lars Ruthotto

IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING (2020)

Article Mathematics, Applied

Layer-Parallel Training of Deep Residual Neural Networks

Stefanie Guenther, Lars Ruthotto, Jacob B. Schroder, Eric C. Cyr, Nicolas R. Gauger

SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE (2020)
