Peer-Reviewed

Enhancing Parallel Scheduling of Grid Jobs in a Multicored Environment

Received: 17 May 2021    Accepted: 9 June 2021    Published: 21 June 2021
Abstract

The computing Grid has emerged as a platform for solving complex and ever-increasing processing needs, and advances in computing technology have ushered in the multicore era, aimed at high-throughput and efficient parallel computing. However, most systems still rely on the underlying hardware for parallelism, despite strong evidence that sequential algorithms do not optimally exploit parallel systems. This research harnesses the benefits of multicore systems by using job and machine grouping methods to enhance parallelism in the scheduling of Grid jobs. The paper presents the results of two separate experiments on a method that parallelizes a scheduling algorithm on two multicore platforms. Machines were grouped arbitrarily, and the total processing power of the machines in each group was computed. To ensure load balancing, jobs were allocated to machine groups in proportion to each group's total processing power. The MinMin Grid scheduling algorithm was then executed independently within each group, with the number of threads varied in powers of two and the number of groups varied among 2, 4, and 8. The same experiment was executed on a single-processor computer, a duo-core machine, and a quad-core machine. The group method recorded a performance improvement of 16% to 85% against the best ordinary MinMin results, and an improvement of 50% to 84% against the ordinary MinMin on corresponding machines. We show that increasing the number of groups improves performance on corresponding machines (approximately 2 times with 2 groups, approximately 3 times with 4 groups, and approximately 6 times with 8 groups). Most importantly, we establish that as the number of processors increases, the grouping method achieves increasingly significant improvements over the ordinary MinMin scheduling algorithm executed on multicore systems.
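For a concrete picture of the method, the sketch below is a minimal, hypothetical Python reconstruction of the group-based MinMin scheduling the abstract describes: machines are grouped arbitrarily, jobs are allocated to groups in proportion to each group's total processing power, and MinMin runs independently in one thread per group. All names, data structures, and the synthetic workload are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; assumed names and synthetic data, not the authors' code.
import random
from concurrent.futures import ThreadPoolExecutor

def minmin(jobs, machines):
    """Classic MinMin: repeatedly pick the (job, machine) pair with the
    smallest earliest completion time and commit that assignment."""
    ready = {m: 0.0 for m in machines}             # machine -> ready time
    schedule, pending = [], list(jobs)
    while pending:
        best = None                                # (completion, job, machine)
        for job in pending:
            for m, power in machines.items():
                completion = ready[m] + job / power
                if best is None or completion < best[0]:
                    best = (completion, job, m)
        completion, job, m = best
        ready[m] = completion
        schedule.append((job, m))
        pending.remove(job)
    return schedule, max(ready.values())           # schedule, group makespan

def group_schedule(jobs, machines, n_groups):
    """Group machines arbitrarily (round-robin here), allocate jobs to groups
    in proportion to each group's total processing power, then run MinMin
    independently in one thread per group."""
    items = list(machines.items())
    groups = [dict(items[i::n_groups]) for i in range(n_groups)]
    total_power = sum(machines.values())
    job_groups, start = [], 0
    for g in groups:                               # proportional job allocation
        share = round(len(jobs) * sum(g.values()) / total_power)
        job_groups.append(list(jobs[start:start + share]))
        start += share
    job_groups[-1].extend(jobs[start:])            # leftover jobs to last group
    with ThreadPoolExecutor(max_workers=n_groups) as pool:
        results = list(pool.map(lambda pair: minmin(*pair),
                                zip(job_groups, groups)))
    return max(makespan for _, makespan in results)

if __name__ == "__main__":
    random.seed(1)
    jobs = [random.uniform(10, 100) for _ in range(200)]               # job lengths (MI)
    machines = {f"m{i}": random.uniform(1.0, 4.0) for i in range(16)}  # MIPS ratings
    for k in (2, 4, 8):                            # group counts used in the paper
        print(f"{k} groups -> makespan {group_schedule(jobs, machines, k):.2f}")
```

Note that CPython's global interpreter lock keeps these threads from running truly in parallel, so this sketch only illustrates the structure of the method; swapping ThreadPoolExecutor for ProcessPoolExecutor (with top-level functions in place of the lambda) would be the natural route to the genuine multicore parallelism the paper exploits.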

Published in Mathematics and Computer Science (Volume 6, Issue 3)
DOI 10.11648/j.mcs.20210603.12
Page(s) 49-58
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2021. Published by Science Publishing Group

Keywords

Multicore Environment, Parallelism, Multi-scheduling, Machine Grouping, Job Grouping, Scheduling

Cite This Article
  • APA Style

    Goodhead Tomvie Abraham, Evans Fiebibiseighe Osaisai, Abalaba Ineyekineye. (2021). Enhancing Parallel Scheduling of Grid Jobs in a Multicored Environment. Mathematics and Computer Science, 6(3), 49-58. https://doi.org/10.11648/j.mcs.20210603.12


    ACS Style

    Goodhead Tomvie Abraham; Evans Fiebibiseighe Osaisai; Abalaba Ineyekineye. Enhancing Parallel Scheduling of Grid Jobs in a Multicored Environment. Math. Comput. Sci. 2021, 6(3), 49-58. doi: 10.11648/j.mcs.20210603.12


    AMA Style

    Goodhead Tomvie Abraham, Evans Fiebibiseighe Osaisai, Abalaba Ineyekineye. Enhancing Parallel Scheduling of Grid Jobs in a Multicored Environment. Math Comput Sci. 2021;6(3):49-58. doi: 10.11648/j.mcs.20210603.12


  • @article{10.11648/j.mcs.20210603.12,
      author = {Goodhead Tomvie Abraham and Evans Fiebibiseighe Osaisai and Abalaba Ineyekineye},
      title = {Enhancing Parallel Scheduling of Grid Jobs in a Multicored Environment},
      journal = {Mathematics and Computer Science},
      volume = {6},
      number = {3},
      pages = {49-58},
      doi = {10.11648/j.mcs.20210603.12},
      url = {https://doi.org/10.11648/j.mcs.20210603.12},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.mcs.20210603.12},
     year = {2021}
    }
    


  • TY  - JOUR
    T1  - Enhancing Parallel Scheduling of Grid Jobs in a Multicored Environment
    AU  - Goodhead Tomvie Abraham
    AU  - Evans Fiebibiseighe Osaisai
    AU  - Abalaba Ineyekineye
    Y1  - 2021/06/21
    PY  - 2021
    N1  - https://doi.org/10.11648/j.mcs.20210603.12
    DO  - 10.11648/j.mcs.20210603.12
    T2  - Mathematics and Computer Science
    JF  - Mathematics and Computer Science
    JO  - Mathematics and Computer Science
    SP  - 49
    EP  - 58
    PB  - Science Publishing Group
    SN  - 2575-6028
    UR  - https://doi.org/10.11648/j.mcs.20210603.12
    VL  - 6
    IS  - 3
    ER  - 


Author Information
  • Goodhead Tomvie Abraham, Computer Science Department, Niger Delta University, Yenagoa, Nigeria

  • Evans Fiebibiseighe Osaisai, Mathematics Department, Niger Delta University, Yenagoa, Nigeria

  • Abalaba Ineyekineye, Mathematics Department, Niger Delta University, Yenagoa, Nigeria
