Research Article | Peer-Reviewed

Influence of Neural Network Learning Algorithms on High Power Amplifier (HPA) Predistortion Performance

Received: 20 October 2025     Accepted: 3 November 2025     Published: 9 December 2025
Abstract

In this article, we propose a feedforward neural network model designed to approximate the inverse transfer characteristic of a High-Power Amplifier (HPA) in order to linearize it using Digital Predistortion (DPD). This approach is particularly relevant for next-generation communication systems, such as those employing OTFS (Orthogonal Time Frequency Space) modulation envisioned for 6G, whose signals exhibit large amplitude variations that exacerbate amplifier nonlinearities. The performance of predistortion depends heavily on the learning algorithm used to train the neural model. We compared three optimization algorithms: Gradient Descent, Gauss-Newton, and Levenberg-Marquardt. The amplifier is modeled using the Rapp model. The neural network architecture consists of a single input neuron, a hidden layer with ten neurons using the hyperbolic tangent activation function, and a linear output neuron. Training and simulations were carried out in MATLAB, and the performance of each algorithm was evaluated using the Mean Squared Error (MSE) criterion, which quantifies the deviation between the ideal transfer characteristic of a linear amplifier and the characteristic obtained after predistortion. The results clearly show that the Levenberg-Marquardt algorithm provides the best approximation of the predistortion function, achieving an MSE on the order of 4.2708 × 10⁻⁸, significantly outperforming Gauss-Newton (1.0481 × 10⁻⁴) and Gradient Descent (0.0272). This superior performance is attributed to Levenberg-Marquardt’s ability to combine the robustness of Gradient Descent with the fast convergence of Gauss-Newton, while avoiding local minima and issues related to poor synaptic weight initialization.
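The Levenberg-Marquardt behaviour described above follows from its weight update rule (a standard formulation, stated here for context rather than quoted from the paper). With e the vector of network output errors, J the Jacobian of e with respect to the synaptic weights w, and μ an adaptive damping factor:

    \Delta \mathbf{w} = -\left( \mathbf{J}^{\top}\mathbf{J} + \mu \mathbf{I} \right)^{-1} \mathbf{J}^{\top}\mathbf{e}

As μ → 0 this reduces to the Gauss-Newton step -(\mathbf{J}^{\top}\mathbf{J})^{-1}\mathbf{J}^{\top}\mathbf{e}; for large μ it approaches a short Gradient Descent step -(1/\mu)\,\mathbf{J}^{\top}\mathbf{e}. The algorithm therefore interpolates between the fast convergence of Gauss-Newton and the robustness of Gradient Descent, as the abstract notes.

A minimal MATLAB sketch of the experiment is given below. It assumes the Deep Learning Toolbox is available; the Rapp parameters Vsat and p, and the input range, are illustrative placeholders rather than values reported in the paper.

    % Rapp AM/AM model: F(v) = v / (1 + (v/Vsat)^(2p))^(1/(2p))
    rapp = @(v, Vsat, p) v ./ (1 + (v ./ Vsat).^(2*p)).^(1 ./ (2*p));

    Vsat = 1;  p = 2;                        % illustrative parameters
    x = linspace(0, 0.9, 500);               % normalized input amplitudes
    y = rapp(x, Vsat, p);                    % compressed HPA output

    % Predistorter = inverse characteristic: the network input is the HPA
    % output and the target is the HPA input (1 input neuron, 10 tanh
    % hidden neurons, 1 linear output neuron).
    net = feedforwardnet(10);
    net.layers{1}.transferFcn = 'tansig';
    net.layers{2}.transferFcn = 'purelin';
    net.trainFcn = 'trainlm';                % 'traingd' for Gradient Descent
    net = train(net, y, x);

    % MSE criterion: cascade predistorter + HPA and compare against the
    % ideal linear characteristic, staying within the invertible range.
    xe = linspace(0, max(y), 400);
    z  = rapp(net(xe), Vsat, p);
    mseVal = mean((z - xe).^2);

MATLAB ships 'trainlm' (Levenberg-Marquardt) and 'traingd' (Gradient Descent) but has no built-in pure Gauss-Newton trainer; 'trainlm' with the damping factor μ driven towards zero behaves as Gauss-Newton.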

Published in American Journal of Neural Networks and Applications (Volume 11, Issue 2)
DOI 10.11648/j.ajnna.20251102.15
Page(s) 88-96
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

Predistortion, Neural Network, Training Algorithm, Amplifier, Approximation

Cite This Article
  • APA Style

    Rakotonirina, H. B., & Randrianandrasana, M. E. (2025). Influence of Neural Network Learning Algorithms on High Power Amplifier (HPA) Predistortion Performance. American Journal of Neural Networks and Applications, 11(2), 88-96. https://doi.org/10.11648/j.ajnna.20251102.15

  • ACS Style

    Rakotonirina, H. B.; Randrianandrasana, M. E. Influence of Neural Network Learning Algorithms on High Power Amplifier (HPA) Predistortion Performance. Am. J. Neural Netw. Appl. 2025, 11(2), 88-96. doi: 10.11648/j.ajnna.20251102.15

  • AMA Style

    Rakotonirina HB, Randrianandrasana ME. Influence of Neural Network Learning Algorithms on High Power Amplifier (HPA) Predistortion Performance. Am J Neural Netw Appl. 2025;11(2):88-96. doi: 10.11648/j.ajnna.20251102.15

  • BibTeX
    @article{10.11648/j.ajnna.20251102.15,
      author = {Hariony Bienvenu Rakotonirina and Marie Emile Randrianandrasana},
      title = {Influence of Neural Network Learning Algorithms on High Power Amplifier (HPA) Predistortion Performance},
      journal = {American Journal of Neural Networks and Applications},
      volume = {11},
      number = {2},
      pages = {88-96},
      doi = {10.11648/j.ajnna.20251102.15},
      url = {https://doi.org/10.11648/j.ajnna.20251102.15},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajnna.20251102.15},
      year = {2025}
    }
    

  • RIS
    TY  - JOUR
    T1  - Influence of Neural Network Learning Algorithms on High Power Amplifier (HPA) Predistortion Performance
    AU  - Hariony Bienvenu Rakotonirina
    AU  - Marie Emile Randrianandrasana
    Y1  - 2025/12/09
    PY  - 2025
    N1  - https://doi.org/10.11648/j.ajnna.20251102.15
    DO  - 10.11648/j.ajnna.20251102.15
    T2  - American Journal of Neural Networks and Applications
    JF  - American Journal of Neural Networks and Applications
    JO  - American Journal of Neural Networks and Applications
    SP  - 88
    EP  - 96
    PB  - Science Publishing Group
    SN  - 2469-7419
    UR  - https://doi.org/10.11648/j.ajnna.20251102.15
    VL  - 11
    IS  - 2
    ER  - 
