Peer-Reviewed

Speculative Packet Dispatch for Virtual Output Queuing Architecture Using LSTM Recurrent Neural Network

Received: 27 April 2021    Accepted: 17 May 2021    Published: 27 May 2021
Abstract

Virtual Output Queuing (VOQ) is an architecture widely employed in modern networking products. Traffic from every ingress port is stored in a set of queues mirroring the structure of the egress ports, so congestion on one egress port is isolated from the other ports. A request-grant protocol routes packets from ingress to egress: when a packet is received, a request signal is issued, and once the request reaches the egress side, a grant signal is generated based on a fixed threshold indicating there is enough space in the egress buffer to absorb the largest packet size dispatched from ingress. The buffer must be sized deep enough to accommodate the in-flight traffic that accumulates when heavy congestion arises after a grant is issued. Waiting for the grant signal before dispatching packets incurs significant end-to-end latency. To alleviate this problem, a speculative packet dispatch (SPD) approach is proposed in which the request-grant protocol is eliminated entirely. Packets are dispatched speculatively from ingress to egress based on predictions that there is enough space in the egress buffer. This is achieved by incorporating an LSTM recurrent neural network into the VOQ controller. The LSTM is trained on time-series data sets generated from past observations of queue occupancy. Experimental results show that SPD significantly improves system performance, reduces buffering requirements, and preserves the congestion-isolation property of VOQ.
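The speculative dispatch decision described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' implementation: the LSTM predictor is stubbed here with a simple linear-trend estimator over the same queue-occupancy time series, and all names and constants (`BUFFER_CAPACITY`, `OccupancyPredictor`, `speculative_dispatch_ok`, the window size) are hypothetical.

```python
from collections import deque

BUFFER_CAPACITY = 64 * 1024   # egress buffer size in bytes (illustrative)

class OccupancyPredictor:
    """Stand-in for the LSTM: predicts the next egress-buffer occupancy
    from a sliding window of past observations using a linear trend."""
    def __init__(self, window=8):
        self.history = deque(maxlen=window)

    def observe(self, occupancy):
        """Record one occupancy sample (bytes currently buffered at egress)."""
        self.history.append(occupancy)

    def predict_next(self):
        """Extrapolate the next occupancy from the average per-step change."""
        h = list(self.history)
        if len(h) < 2:
            return h[-1] if h else 0
        trend = (h[-1] - h[0]) / (len(h) - 1)
        return max(0, h[-1] + trend)

def speculative_dispatch_ok(predictor, packet_len):
    """Dispatch speculatively iff the predicted occupancy still leaves
    room for this packet -- no request-grant round trip is performed."""
    return predictor.predict_next() + packet_len <= BUFFER_CAPACITY
```

In the paper's scheme the trend estimator would be replaced by the trained LSTM, but the surrounding control flow is the same: observe occupancy, predict, and dispatch only when the prediction indicates sufficient egress-buffer space.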

Published in International Journal of Information and Communication Sciences (Volume 6, Issue 2)
DOI 10.11648/j.ijics.20210602.13
Page(s) 38-45
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2021. Published by Science Publishing Group

Keywords

Computer Networks, Virtual Output Queuing, Long Short Term Memory, Machine Learning, Recurrent Neural Network

Cite This Article
  • APA Style

    Sumarsono, A., & Rodriguez, M. (2021). Speculative Packet Dispatch for Virtual Output Queuing Architecture Using LSTM Recurrent Neural Network. International Journal of Information and Communication Sciences, 6(2), 38-45. https://doi.org/10.11648/j.ijics.20210602.13

Author Information
  • Alex Sumarsono, Department of Computer Engineering, California State University East Bay, Hayward, USA

  • Mario Rodriguez, Department of Computer Engineering, California State University East Bay, Hayward, USA
