Article

State of charge estimation of an electric vehicle's battery using tiny neural network embedded on small microcontroller units

Journal

INTERNATIONAL JOURNAL OF ENERGY RESEARCH
Volume 46, Issue 6, Pages 8102-8119

Publisher

WILEY-HINDAWI
DOI: 10.1002/er.7713

Keywords

1D CNN; data-driven; GRU neural network; lithium-ion battery; machine learning; quantization; state of charge

This paper investigates new SOC estimation strategies for electric vehicles using 1D CNN and GRU-RNN, finding that the 1D CNN model is more accurate. Furthermore, it compares the quantization impact of using STM32Cube.AI versus TFLite Micro, with the former reducing the model size significantly more than the latter.
The latest breakthroughs in artificial intelligence have led to its adoption in electric vehicle (EV) battery management systems. Simultaneously, the remarkable increase in the number of embedded systems in EVs has motivated the search for new, optimized state of charge (SOC) estimation strategies suitable for implementation on small, resource-limited microcontroller units (MCUs). In this regard, this paper presents the design and quantization of a 1D convolutional neural network (1D CNN) and a GRU-recurrent neural network (GRU-RNN) devoted to SOC estimation. Both NN algorithms are designed and trained using data collected from an 18650 Li-ion battery. To account for the limited computational resources of EV MCUs, the pre-trained models are then converted into highly optimized C code using the latest model quantization techniques. Specifically, the paper investigates the performance of STM32Cube.AI and TensorFlow Lite for Microcontrollers (TFLite Micro) in quantizing the activation functions and weights of the proposed NN models, converting them from 32-bit floating-point to 8-bit integer precision. Experimental results obtained under different profiles indicate that the 1D CNN is more accurate than the GRU-based model, with a root mean squared error (RMSE) of 2.33% and a mean absolute error (MAE) of 1.62%. Furthermore, the impact of quantization with STM32Cube.AI is compared against that of TFLite Micro. The results demonstrate the superiority of the 1D CNN model quantized with STM32Cube.AI, which reduces the model's flash memory footprint from 10.82 to 2.89 KB and its RAM footprint from 2.49 to 1.04 KB, whereas TFLite Micro reduces the RAM size from 7.0107 to 4.23 KB and the flash size from 43.361 to 15.88 KB.
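
To make the quantization step concrete, below is a minimal sketch of how a tiny 1D CNN SOC estimator could be defined in Keras and converted to full 8-bit integer precision with TensorFlow Lite's post-training quantization path, the workflow TFLite Micro consumes. The layer sizes, window length, and input features (voltage, current, temperature) are illustrative assumptions, not the architecture reported in the paper.

```python
# Hypothetical sketch, not the authors' exact model or hyperparameters.
import numpy as np
import tensorflow as tf

WINDOW = 30        # assumed number of past time steps fed to the network
N_FEATURES = 3     # assumed inputs: voltage, current, temperature

# Tiny 1D CNN mapping a measurement window to a single SOC value in [0, 1].
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.Conv1D(filters=16, kernel_size=3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(x_train, y_train, ...)  # would be trained on the 18650 Li-ion data

# Post-training full-integer quantization: weights and activations move from
# 32-bit float to 8-bit integers, shrinking flash and RAM usage on the MCU.
def representative_dataset():
    # A few hundred representative input windows calibrate the int8 scales;
    # random data is used here purely as a placeholder.
    for _ in range(200):
        yield [np.random.rand(1, WINDOW, N_FEATURES).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_int8_model = converter.convert()
with open("soc_cnn_int8.tflite", "wb") as f:
    f.write(tflite_int8_model)
```

For deployment, the resulting .tflite file is typically embedded as a C array (for example via `xxd -i soc_cnn_int8.tflite`) and run with the TFLite Micro interpreter, while the STM32Cube.AI route instead takes the trained model and generates optimized C code directly for the STM32 target.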
