Manuscript received September 4, 2025; revised October 18, 2025; accepted October 22, 2025
Abstract—As generative Artificial Intelligence (AI) models become increasingly integrated into software development workflows, understanding their efficiency and code quality is critical. This study presents a comprehensive comparison of three leading AI models—ChatGPT (GPT-4-turbo), Claude Sonnet, and DeepSeek-V3—for automated code generation, focusing specifically on sorting algorithms. The models are evaluated across multiple metrics, including execution time, memory usage, peak memory consumption, logical and physical file sizes, and code readability. Python implementations of Insertion Sort, Merge Sort, Quick Sort, and Heap Sort are generated by each model and benchmarked in a consistent Linux Docker environment. The results reveal that ChatGPT leads in overall efficiency, with the fastest average execution time, the lowest peak memory usage, and the highest readability scores. DeepSeek demonstrates competitive performance, particularly in producing readable code, while Claude shows higher memory consumption and lower readability. This analysis provides practical insight into the trade-offs between code quality and system performance in AI-generated programming, offering valuable guidance for researchers and developers alike.
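To illustrate the kind of measurement the abstract describes, the sketch below times one sorting function and records its peak memory with Python's standard-library tools (time.perf_counter and tracemalloc). The paper does not state which instrumentation it actually uses, so the harness, the benchmark function name, and the input sizes here are illustrative assumptions only, not the authors' methodology.

```python
import random
import time
import tracemalloc


def insertion_sort(arr):
    # In-place insertion sort, one of the four algorithms compared in the study.
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr


def benchmark(sort_fn, data):
    # Return (elapsed seconds, peak traced memory in bytes) for one run of sort_fn.
    sample = list(data)  # copy so every algorithm sorts identical input
    tracemalloc.start()
    start = time.perf_counter()
    sort_fn(sample)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak


if __name__ == "__main__":
    data = [random.randint(0, 10_000) for _ in range(5_000)]
    t, peak = benchmark(insertion_sort, data)
    print(f"insertion_sort: {t:.4f} s, peak {peak / 1024:.1f} KiB")
```

A full comparison in the spirit of the study would repeat such runs for each model's Merge Sort, Quick Sort, and Heap Sort implementations inside the same Docker container and average the results across several input sizes.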