Research Article | Peer-Reviewed

AI Work Quantization Model: Closed-System AI Computational Effort Metric

Received: 12 March 2025     Accepted: 1 April 2025     Published: 21 June 2025
Abstract

The rapid adoption of AI-driven automation in IoT environments, particularly in smart cities and industrial systems, necessitates a standardized approach to quantifying AI’s computational workload. Existing methodologies lack a consistent framework for measuring AI computational effort across diverse architectures, posing challenges in fair taxation models and energy-aware workload assessments. This study introduces the Closed-System AI Computational Effort Metric, a theoretical framework that quantifies real-time computational effort by incorporating input/output complexity, execution dynamics, and hardware-specific performance factors. The model ensures comparability between AI workloads across traditional CPUs and modern GPU/TPU accelerators, facilitating standardized performance evaluations. Additionally, we propose an energy-aware extension to assess AI’s environmental impact, enabling sustainability-focused AI optimizations and equitable taxation models. Our findings establish a direct correlation between AI workload and human productivity, where 5 AI Workload Units equate to approximately 60-72 hours of human labor, exceeding a full-time workweek. By systematically linking AI computational effort to human labor, this framework enhances the understanding of AI’s role in workforce automation, industrial efficiency, and sustainable computing. Future work will focus on refining the model through dynamic workload adaptation, complexity normalization, and energy-aware AI cost estimation, further broadening its applicability in diverse AI-driven ecosystems.
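To make the labor-equivalence figure above concrete, the sketch below assumes a purely linear reading of the abstract's stated ratio (5 AI Workload Units correspond to roughly 60-72 human-labor hours, i.e. 12-14.4 hours per unit). The function name and structure are illustrative only; the paper's actual metric additionally weights input/output complexity, execution dynamics, and hardware factors, none of which are reproduced here.

    # Hypothetical linear conversion implied by the abstract's 5-unit figure;
    # NOT the paper's published formula, which also weights complexity and hardware.
    HOURS_PER_UNIT_LOW = 60 / 5    # 12.0 human-labor hours per AI Workload Unit
    HOURS_PER_UNIT_HIGH = 72 / 5   # 14.4 human-labor hours per AI Workload Unit

    def human_labor_hours(workload_units: float) -> tuple[float, float]:
        """Map AI Workload Units to an approximate human-labor-hour range."""
        return (workload_units * HOURS_PER_UNIT_LOW,
                workload_units * HOURS_PER_UNIT_HIGH)

    low, high = human_labor_hours(5)
    print(f"5 AI Workload Units ≈ {low:.0f}-{high:.0f} human-labor hours")
    # Exceeds a standard 40-hour workweek, as the abstract notes:
    print(f"Margin over a 40-hour week: {low - 40:.0f}-{high - 40:.0f} hours")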

Published in American Journal of Artificial Intelligence (Volume 9, Issue 1)
DOI 10.11648/j.ajai.20250901.16
Page(s) 55-67
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

AI Work Quantization, Computational Effort, Smart Cities, AI Taxation, AI Sustainability, IoT, Cloud AI

References
[1] Regulation EU.
[2] Landauer, Rolf. (1961). Irreversibility and heat generation in the computing process. IBM Journal of Research and Development, 5(3), 183–191.
[3] Bennett, C. H. (1982). The thermodynamics of computation—a review. International Journal of Theoretical Physics, 21(12), 905–940.
[4] Talkner, Peter and Campisi, Michele and Hänggi, Peter. (2009). Fluctuation theorems in driven open quantum systems. Journal of Statistical Mechanics: Theory and Experiment, 2009(02), P02025. IOP Publishing.
[5] Campisi, M. and Talkner, P. and Hänggi, P. (2011). Colloquium: Quantum fluctuation relations: Foundations and applications. Reviews of Modern Physics, 83(3), 771–791.
[6] Strubell, Emma and Ganesh, Ananya and McCallum, Andrew. (2020). Energy and policy considerations for modern deep learning research. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09), 13693–13696 (arXiv: 1906.02243).
[7] Schwartz, R. and Dodge, J. and Smith, N. A. and Etzioni, O. (2020). Green AI. Communications of the ACM, 63(12), 54–63.
[8] Li, M. and Vitányi, P. M. B. (2008). An Introduction to Kolmogorov Complexity and Its Applications. Springer.
[9] Kaplan, Jared and McCandlish, Sam and Henighan, Tom and others. (2020). Scaling Laws for Neural Language Models. arXiv preprint arXiv: 2001.08361.
[10] Hoffmann, Jordan and others. (2022). Interpreting the Scaling Laws for Neural Language Models. arXiv preprint arXiv: 2206.00364.
[11] Erdil, Ege and Besiroglu, Tamay. (2022). Algorithmic Progress in Computer Vision. arXiv preprint arXiv: 2212.05153.
[12] Hager, Georg and Wellein, Gerhard. (2010). Introduction to High Performance Computing for Scientists and Engineers. CRC Press.
[13] Rahman, A. and others. (2024). Life-Cycle Emissions of AI Hardware: A Cradle-to-Grave Approach and Generational Trends. arXiv preprint arXiv: 2402.01671.
[14] Masanet, Eric and Shehabi, Arman and Lei, Nuoa and Smith, Sarah and Koomey, Jonathan. (2020). Recalibrating global data center energy-use estimates. Science, 367(6481), 984–986. American Association for the Advancement of Science.
[15] Shehabi, Arman and Smith, Salim and Sartor, Dale and Brown, Robert E. and Herrlin, Michael and Koomey, Jonathan and Lintott, Andrew and Masanet, Eric. (2016). Data center growth in the United States: decoupling the demand for services from electricity use. Energy Policy, 94, 461–472. Elsevier.
[16] Sevilla, Jaime and Heim, Lennart and Ho, Anson and Besiroglu, Tamay and Hobbhahn, Marius and Villalobos, Pablo. (2022). Compute Trends Across Three Eras of Machine Learning. 2022 International Joint Conference on Neural Networks (IJCNN), 1–8.
[17] Hou, Jilei. (2019). Qualcomm: Here’s why quantization matters for AI.
[18] Siddegowda, Sangeetha and Fournarakis, Marios and Nagel, Markus and Blankevoort, Tijmen and Patel, Chirag and Khobare, Abhijit. (2022). Neural Network Quantization with AI Model Efficiency Toolkit (AIMET). arXiv preprint arXiv: 2201.08442.
[19] Sibai, Fadi N. and Asaduzzaman, Abu and El-Moursy, Ali. Characterization and Machine Learning Classification, 12, 83858–83875.
[20] Susskind, Zachary and Arden, Bryce and John, Lizy K. and Stockton, Patrick and John, Eugene B. (2021). Neuro-Symbolic AI: An Emerging Class of AI Workloads and Their Characterization. arXiv preprint arXiv: 2109.06133.
[21] Lang, Jiedong and Guo, Zhehao and Huang, Shuyu. (2024). A Comprehensive Study on Quantization Techniques for Large Language Models. arXiv preprint arXiv: 2411.02530.
[22] Dongarra, Jack and Gunnels, John and Bayraktar, Harun and Haidar, Azzam and Ernst, Dan. Hardware Trends Impacting Floating-Point Computations in Scientific Applications.
[23] Mutschler, Ann. The Murky World Of AI Benchmarks.
[24] Ho, Anson and Besiroglu, Tamay and Erdil, Ege and Owen, David and Rahman, Robi and Guo, Zifan Carl and Atkinson, David and Thompson, Neil and Sevilla, Jaime. (2024). Algorithmic Progress in Language Models. arXiv preprint arXiv: 2403.05812.
[25] Amodei, Dario and Hernandez, Danny. (2018). AI and Compute. URL: https://openai.com/blog/ai-and-compute/
[26] Lacoste, Anne and others. (2021). CarbonTracker: Tracking the Carbon Footprint of Training Deep Learning Models. Advances in Neural Information Processing Systems.
[27] Han, Song and Mao, Huizi and Dally, William J. (2015). Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. International Conference on Learning Representations (ICLR).
[28] Dean, Jeffrey and Corrado, Greg and Monga, Rajat and Chen, Kai and Devin, Matthieu and Le, Quoc V and Mao, Mark Z and Ranzato, Marc’Aurelio and Senior, Andrew and Tucker, Paul and others. (2012). Large Scale Distributed Deep Networks. Advances in Neural Information Processing Systems, 25.
[29] Bianchini, Monica and Frasconi, Paolo and Gori, Marco and Maggini, Marco and others. (1998). Optimal learning in artificial neural networks: A theoretical view. Neural network systems techniques and applications, 143, 1–51. Academic Press, New York.
[30] He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian. (2016). Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778.
[31] Tang, Lei and Liu, Huan. (2012). Scalable learning of collective behavior. IEEE Transactions on Knowledge and Data Engineering, 24(6), 1080–1091. IEEE.
[32] Z. Shaukat and S. Ali and Q. U. A. Farooq and C. Xiao and S. Sahiba and A. Ditta. (2020). Cloud–based efficient scheme for handwritten digit recognition. Multimedia Tools and Applications, 79, 29537–29549.
[33] Aghion, Philippe and Antonin, Céline and Bunel, Simon. (2020). What are the labor and product market effects of automation? New evidence from France.
[34] Herculano-Houzel, Suzana. (2011). Scaling of brain metabolism with a fixed energy budget per neuron: Implications for neuronal activity, plasticity, and evolution. Proceedings of the National Academy of Sciences, 108(10), 4230–4235. National Academy of Sciences.
[35] Raichle, Marcus E. and Gusnard, Debra A. (2002). Appraising the brain’s energy budget. Proceedings of the National Academy of Sciences, 99(16), 10237-10239. National Academy of Sciences.
[36] Thorpe, Simon and Fize, Didier and Marlot, Catherine. (1996). Speed of processing in the human visual system. Nature, 381(6582), 520–522. Nature Publishing Group.
[37] TOP500. (2023). Top500 Supercomputer Rankings - June 2023.
Cite This Article
  • APA Style

    Sharma, A. K., Bidollahkhani, M., & Kunkel, J. M. (2025). AI Work Quantization Model: Closed-System AI Computational Effort Metric. American Journal of Artificial Intelligence, 9(1), 55-67. https://doi.org/10.11648/j.ajai.20250901.16


    ACS Style

    Sharma, A. K.; Bidollahkhani, M.; Kunkel, J. M. AI Work Quantization Model: Closed-System AI Computational Effort Metric. Am. J. Artif. Intell. 2025, 9(1), 55-67. doi: 10.11648/j.ajai.20250901.16


    AMA Style

    Sharma AK, Bidollahkhani M, Kunkel JM. AI Work Quantization Model: Closed-System AI Computational Effort Metric. Am J Artif Intell. 2025;9(1):55-67. doi: 10.11648/j.ajai.20250901.16


  • @article{10.11648/j.ajai.20250901.16,
      author = {Aasish Kumar Sharma and Michael Bidollahkhani and Julian Martin Kunkel},
      title = {AI Work Quantization Model: Closed-System AI Computational Effort Metric},
      journal = {American Journal of Artificial Intelligence},
      volume = {9},
      number = {1},
      pages = {55-67},
      doi = {10.11648/j.ajai.20250901.16},
      url = {https://doi.org/10.11648/j.ajai.20250901.16},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajai.20250901.16},
      abstract = {The rapid adoption of AI-driven automation in IoT environments, particularly in smart cities and industrial systems, necessitates a standardized approach to quantifying AI’s computational workload. Existing methodologies lack a consistent framework for measuring AI computational effort across diverse architectures, posing challenges in fair taxation models and energy-aware workload assessments. This study introduces the Closed-System AI Computational Effort Metric, a theoretical framework that quantifies real-time computational effort by incorporating input/output complexity, execution dynamics, and hardware-specific performance factors. The model ensures comparability between AI workloads across traditional CPUs and modern GPU/TPU accelerators, facilitating standardized performance evaluations. Additionally, we propose an energy-aware extension to assess AI’s environmental impact, enabling sustainability-focused AI optimizations and equitable taxation models. Our findings establish a direct correlation between AI workload and human productivity, where 5 AI Workload Units equate to approximately 60-72 hours of human labor, exceeding a full-time workweek. By systematically linking AI computational effort to human labor, this framework enhances the understanding of AI’s role in workforce automation, industrial efficiency, and sustainable computing. Future work will focus on refining the model through dynamic workload adaptation, complexity normalization, and energy-aware AI cost estimation, further broadening its applicability in diverse AI-driven ecosystems.},
      year = {2025}
    }
    


  • TY  - JOUR
    T1  - AI Work Quantization Model: Closed-System AI Computational Effort Metric
    AU  - Aasish Kumar Sharma
    AU  - Michael Bidollahkhani
    AU  - Julian Martin Kunkel
    Y1  - 2025/06/21
    PY  - 2025
    N1  - https://doi.org/10.11648/j.ajai.20250901.16
    DO  - 10.11648/j.ajai.20250901.16
    T2  - American Journal of Artificial Intelligence
    JF  - American Journal of Artificial Intelligence
    JO  - American Journal of Artificial Intelligence
    SP  - 55
    EP  - 67
    PB  - Science Publishing Group
    SN  - 2639-9733
    UR  - https://doi.org/10.11648/j.ajai.20250901.16
    AB  - The rapid adoption of AI-driven automation in IoT environments, particularly in smart cities and industrial systems, necessitates a standardized approach to quantifying AI’s computational workload. Existing methodologies lack a consistent framework for measuring AI computational effort across diverse architectures, posing challenges in fair taxation models and energy-aware workload assessments. This study introduces the Closed-System AI Computational Effort Metric, a theoretical framework that quantifies real-time computational effort by incorporating input/output complexity, execution dynamics, and hardware-specific performance factors. The model ensures comparability between AI workloads across traditional CPUs and modern GPU/TPU accelerators, facilitating standardized performance evaluations. Additionally, we propose an energy-aware extension to assess AI’s environmental impact, enabling sustainability-focused AI optimizations and equitable taxation models. Our findings establish a direct correlation between AI workload and human productivity, where 5 AI Workload Units equate to approximately 60-72 hours of human labor, exceeding a full-time workweek. By systematically linking AI computational effort to human labor, this framework enhances the understanding of AI’s role in workforce automation, industrial efficiency, and sustainable computing. Future work will focus on refining the model through dynamic workload adaptation, complexity normalization, and energy-aware AI cost estimation, further broadening its applicability in diverse AI-driven ecosystems.
    VL  - 9
    IS  - 1
    ER  - 


Author Information
  • Gesellschaft für Wissenschaftliche Datenverarbeitung mbH Goettingen (GWDG), Goettingen, Germany

  • Faculty of Mathematics and Computer Science, Georg-August-University of Goettingen, Goettingen, Germany

  • Faculty of Mathematics and Computer Science, Georg-August-University of Goettingen, Goettingen, Germany; Gesellschaft für Wissenschaftliche Datenverarbeitung mbH Goettingen (GWDG), Goettingen, Germany
