Efficiency Assessment of the Artificial Intelligence Market: Exploring the Limits

Keywords

artificial intelligence
generative AI
DEA analysis
AI efficiency
AI investments
semiconductors market
AI market
large language models
AI economic impact

How to Cite

Kouzminov Y., & Kruchinskaia E. (2025). Efficiency Assessment of the Artificial Intelligence Market: Exploring the Limits. Foresight and STI Governance, 19(4), 6-16. https://doi.org/10.17323/fstig.2025.29079

Abstract

The development of Artificial Intelligence (AI) is significantly impacting the global economy, transforming corporate strategies and enhancing operational efficiency. This study analyzes the relative efficiency of the Generative AI (GenAI) market by comparing the market size of AI solutions with the market sizes of the chips, servers, and data center infrastructure required for their operation. The study hypothesizes that the current AI market, despite its rapid development, is catching up with the component market and does not yet reflect a proportional relationship between the volumes of these two markets (the hardware market and the AI solutions market).

It is emphasized that the capital expenditures of technology giants on AI infrastructure have increased significantly, and achieving a balance between the size of the hardware market that supports AI and the size of the AI solutions market itself may take decades. To assess the efficiency of the AI market, the Data Envelopment Analysis (DEA) methodology is applied, treating the market sizes of components as "inputs" and the market size of AI solutions as "outputs". The results of the DEA analysis of GenAI market dynamics from 2016 to 2024 reveal a non-linear pattern of development: starting in 2021, the trend reverses and efficiency indicators decline, which supports the hypothesis that AI technologies are catching up with the component market. Fluctuations in efficiency begin three years after the deployment of the first large language models, indicating their significance for hardware demand but not yet demonstrating sufficient returns in the form of comparable growth of the AI solutions market.
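The input-output logic described above can be illustrated with a minimal sketch. In the single-input, single-output case used here (component market size as the input, AI solutions market size as the output), the DEA efficiency of each decision-making unit (a year, in this study) reduces to its output/input ratio normalized by the best ratio in the sample. The yearly figures below are hypothetical placeholders, not the article's data:

```python
# Minimal single-input/single-output DEA sketch: each year is a
# decision-making unit (DMU) with the hardware market size as its input
# and the AI solutions market size as its output.

def dea_efficiency(inputs, outputs):
    """Relative efficiency of each DMU: its output/input ratio divided
    by the best ratio observed in the sample (the CCR model collapses
    to this in the one-input, one-output case)."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical market sizes in $bn, for illustration only
years = [2020, 2021, 2022, 2023]
hardware = [50.0, 70.0, 110.0, 180.0]   # chips, servers, data centers
solutions = [40.0, 60.0, 80.0, 110.0]   # AI solutions market

scores = dea_efficiency(hardware, solutions)
for y, s in zip(years, scores):
    print(y, round(s, 3))
```

With these invented numbers, the last two years score below the frontier year, mirroring the kind of efficiency decline the study reports after 2021; the full study uses a proper multi-input DEA model rather than this one-dimensional reduction.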

The limitations of the study are associated with the time interval of the analysis (2016-2024) and the composition of the companies included, which covers most, but not all, of the market. The novelty of the study lies in applying DEA to a comprehensive assessment of the AI market that distinguishes between the component market and the market of AI technological solutions. The results obtained provide a critical assessment of the prospects for the development of the AI market, identify an imbalance between the "soft" (technological solutions) and "hard" (components) markets, and point to the potential for more efficient exploration and use of generative models. However, the results require further development in terms of describing the effects in different sectors of the economy.




This work is licensed under a Creative Commons Attribution 4.0 International License.
