Information, Computing and Intelligent systems
https://itvisnyk.kpi.ua/

The "Information, Computing and Intelligent systems" journal is the legal successor of the collection "Bulletin of NTUU "KPI". Informatics, Management and Computer Engineering", which was founded in 1964 at the Faculty of Informatics and Computer Engineering.

ISSN 2708-4930 (Print): https://portal.issn.org/resource/ISSN/2708-4930
ISSN 2786-8729 (Online): https://portal.issn.org/resource/ISSN/2786-8729

The founder is the National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute".

Journal abbreviation: Inf. Comput. and Intell. syst. j.

Language: en-US. Contacts: iklymenko.fict@gmail.com (Deputy Editor-in-Chief Iryna Klymenko), icisj.ua@gmail.com (Executive Secretary Liudmyla Mishchenko). Last updated: Sat, 27 Dec 2025 22:06:12 +0200.

Intelligent traffic management method in software-defined networks based on behavioral classification and adaptive priority service
https://itvisnyk.kpi.ua/article/view/334049

The growing complexity of modern enterprise network environments demands sophisticated traffic management solutions that can provide quality of service (QoS) guarantees for encrypted and heterogeneous flows. Existing traffic management approaches face significant challenges when dealing with encrypted protocols and diverse application requirements, resulting in performance degradation for critical services and inefficient resource utilization. This paper addresses the problem of intelligent traffic management in software-defined networks through behavioral classification and adaptive priority service mechanisms.

The study examines the development and implementation of an integrated traffic management method that combines behavioral deep packet inspection, class-based queuing, and weighted random early detection algorithms. The research investigates how behavioral flow characteristics remain observable in encrypted traffic environments and how these patterns can be leveraged for effective QoS provisioning. The proposed method uses packet timing patterns, connection behaviors, and flow statistics to classify traffic without relying on payload inspection or predefined port assignments.

Experimental validation through discrete-event simulation demonstrates significant performance improvements over traditional first-in-first-out mechanisms. The behavioral classification component achieves over 95% classification accuracy. The experimental results show up to a 97.5% improvement in latency and 0% packet loss for high-priority traffic.

Integrating behavioral traffic recognition with adaptive queue management within a programmable network framework provides an effective and innovative approach to maintaining stable service quality in encrypted, multi-service environments.
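As an illustration of the queue-management logic just described, here is a minimal sketch of class-based weighted random early detection; the class names, thresholds, and drop probabilities are illustrative assumptions, not the authors' parameters:

```python
import random

# Per-class WRED profiles: higher-priority classes get higher drop
# thresholds, so their packets are discarded later and less aggressively.
WRED_PROFILES = {
    "realtime":    {"min_th": 40, "max_th": 80, "max_p": 0.02},
    "interactive": {"min_th": 30, "max_th": 70, "max_p": 0.05},
    "bulk":        {"min_th": 10, "max_th": 50, "max_p": 0.20},
}
EWMA_WEIGHT = 0.002  # standard RED smoothing weight for the average queue


def update_avg(avg_qlen: float, qlen: int) -> float:
    """Exponentially weighted moving average of the queue length."""
    return (1 - EWMA_WEIGHT) * avg_qlen + EWMA_WEIGHT * qlen


def drop(avg_qlen: float, traffic_class: str) -> bool:
    """Classic WRED decision: no drops below min_th, forced drop above
    max_th, linearly increasing drop probability in between."""
    p = WRED_PROFILES[traffic_class]
    if avg_qlen < p["min_th"]:
        return False
    if avg_qlen >= p["max_th"]:
        return True
    prob = p["max_p"] * (avg_qlen - p["min_th"]) / (p["max_th"] - p["min_th"])
    return random.random() < prob
```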
The proposed method is compatible with existing software-defined network controllers and can be deployed without modification of application protocols or infrastructure components.

Dmytro Oboznyi, Yurii Kulakov
Copyright (c) 2025 Information, Computing and Intelligent systems
Published: Sat, 27 Dec 2025 00:00:00 +0200

Approach to hybrid load management in Fat-Tree web clusters
https://itvisnyk.kpi.ua/article/view/338564

The paper presents an approach to hybrid load management in a web cluster capable of providing adaptive request balancing based on load prediction and resilience to random web server failures. The proposed architecture is built upon the Fat-Tree topology, which ensures high scalability, structural redundancy, and efficient routing within the cluster network. The developed system performs load forecasting using moving-average methods and Erlang-based queueing models, enabling the estimation of overload probabilities and proactive redistribution of computational resources. Four representative simulation scenarios were analyzed: baseline load, peak load, dynamic traffic variations, and random server failures. The results demonstrate enhanced system reliability, reduced average response time, and more balanced utilization of cluster resources. In the context of rapidly growing web services and user traffic volumes, maintaining high reliability and efficiency of clustered infrastructures becomes increasingly significant. Even with robust topologies such as Fat-Tree, irregular traffic patterns and sudden surges in client requests can cause local overloads and performance degradation. Random node failures further complicate cluster management, necessitating adaptive and predictive control mechanisms. The proposed model integrates Fat-Tree network simulation with statistical forecasting algorithms, forming the basis for proactive load management. This integration minimizes service degradation risks, responds dynamically to workload changes, and maintains stable operation of web infrastructures under partial node failures. The architecture shows strong potential for real-time implementation in large-scale distributed web systems and can be further enhanced by incorporating machine learning or wavelet-based forecasting methods to improve the accuracy of load estimation and system adaptability.

Kostiantyn Radchenko, Artem Chernenkyi
Copyright (c) 2025 Information, Computing and Intelligent systems
License: https://creativecommons.org/licenses/by/4.0/
Published: Sat, 27 Dec 2025 00:00:00 +0200

Optimization neural network for time series processing
https://itvisnyk.kpi.ua/article/view/341480

The article proposes an optimization neural network architecture and a test-sample synthesis model for extrapolating time series parameters. In particular, an input layer implementing a nonlinear trade-off optimization scheme has been added. The behavior of the time series was extrapolated from a test sample formed as a data model whose trend is selected by the method of least squares.
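A minimal sketch of the least-squares trend selection and extrapolation step just described; the polynomial degree and forecast horizon are illustrative assumptions:

```python
import numpy as np

def extrapolate_trend(y: np.ndarray, horizon: int, degree: int = 1) -> np.ndarray:
    """Fit a least-squares polynomial trend to an observed series and
    evaluate it beyond the observation interval."""
    t = np.arange(len(y))
    coeffs = np.polyfit(t, y, degree)                 # least-squares fit
    t_future = np.arange(len(y), len(y) + horizon)    # outside the interval
    return np.polyval(coeffs, t_future)

# Example: a noisy linear series extrapolated 5 steps ahead.
rng = np.random.default_rng(0)
series = 0.7 * np.arange(50) + rng.normal(0.0, 1.0, 50)
print(extrapolate_trend(series, horizon=5))
```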
The scientific novelty of the results lies in these architectural decisions.

The aim of the research is to develop an optimization network architecture and a data model for extrapolation that improve the accuracy and reduce the time of predicting the behavior of the time series outside the observation interval. Subject of research: the architecture of an artificial neural network and methods of time series extrapolation. Object of research: the processes of architectural synthesis of an artificial neural network and of extrapolating time series behavior outside the observation interval.

The optimization layer imposes minimal approximation-error requirements on the training and test samples. This is especially appropriate for time series with stochastic noise and reduces the impact of random errors on prediction results. Using model data for extrapolation makes it possible to determine the behavior of the time series outside the observation interval. At the same time, the horizon over which forecasts retain acceptable accuracy increases. These solutions are reflected in the name of the optimization neural network proposed by the authors. The effectiveness of the proposed solutions was studied through simulation modeling on a modified artificial neural network. The calculations demonstrated improved adequacy of the data models and increased extrapolation accuracy.

Danylo Baran, Oleksii Pysarchuk
Copyright (c) 2025 Information, Computing and Intelligent systems
License: https://creativecommons.org/licenses/by/4.0/
Published: Sat, 27 Dec 2025 00:00:00 +0200

Comparative analysis of LCNet050 and MobileNetV3 architectures in hybrid quantum–classical neural networks for image classification
https://itvisnyk.kpi.ua/article/view/333887

This study explores the impact of classical backbone architecture on the performance of hybrid quantum-classical neural networks in image classification tasks. Hybrid models combine the representational power of classical deep learning with the potential advantages of quantum computation. Specifically, this research employs a quanvolutional neural network architecture in which a quantum convolutional layer, based on a four-qubit Ry circuit, preprocesses input images before classical processing.

Despite the growing interest in hybrid models, few studies have systematically investigated how variations in classical architecture design affect the overall performance of hybrid quantum-classical neural networks. To address this gap, we compare two lightweight convolutional backbones, MobileNetV3Small050 and LCNet050, integrated with an identical quantum preprocessing layer. Both models are evaluated on the CIFAR-10 dataset using 5-fold stratified cross-validation. Performance is assessed using multiple metrics, including accuracy, macro- and micro-averaged area under the curve, and class-wise confusion matrices.

The results indicate that the LCNet-based hybrid model consistently outperforms its MobileNet counterpart, achieving higher overall accuracy and area under the curve scores, along with improved class balance and robustness in distinguishing less-represented classes. These findings underscore the critical role of classical backbone selection in hybrid quantum-classical architectures.
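Because a circuit of single-qubit Ry rotations prepares a product state, each Pauli-Z expectation has the closed form ⟨Z⟩ = cos θ, so the quanvolutional feature map can be sketched without a quantum simulator. This is a simplified illustration of the idea, not the authors' exact circuit or encoding:

```python
import numpy as np

def quanv_features(image: np.ndarray) -> np.ndarray:
    """Toy quanvolutional layer: each 2x2 patch drives a 4-qubit circuit of
    Ry(pi * pixel) rotations; the four Pauli-Z expectations become four
    output channels. For Ry-only circuits, <Z> = cos(theta) per qubit."""
    h, w = image.shape
    out = np.empty((h // 2, w // 2, 4))
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            patch = image[i:i + 2, j:j + 2].reshape(4)   # pixels in [0, 1]
            out[i // 2, j // 2] = np.cos(np.pi * patch)  # <Z> of Ry(pi*x)|0>
    return out

features = quanv_features(np.random.rand(28, 28))
print(features.shape)  # (14, 14, 4)
```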
While the quantum layer remains fixed, the synergy between quantum preprocessing and classical feature extraction significantly affects model performance.

This study contributes to a growing body of work on quantum-enhanced learning systems by demonstrating the importance of classical design choices. Future research may extend these insights to alternative datasets, deeper or transformer-based backbones, and more expressive quantum circuits.

Arsenii Khmelnytskyi, Yuri Gordienko
Copyright (c) 2025 Information, Computing and Intelligent systems
License: https://creativecommons.org/licenses/by/4.0/
Published: Sat, 27 Dec 2025 00:00:00 +0200

Evaluation of the effectiveness of two approaches to building damage detection with satellite imagery
https://itvisnyk.kpi.ua/article/view/341475

This study addresses approaches to satellite image analysis for assessing infrastructure damage. The main aim is to conduct a comprehensive comparative analysis of the effectiveness of two key machine learning approaches: specialized semantic segmentation based on the U-Net architecture and generalized visual analysis using large vision-language models. The object of the research is the process of quantitatively benchmarking these two distinct approaches to determine their practical applicability for multi-class damage classification.

The research material is the publicly available xView2 dataset. The methods involved two parallel experiments. For the semantic segmentation approach, a U-Net model with an EfficientNet-B4 encoder was implemented and trained on 6-channel input data ("before" and "after" images) using a combined Dice and Focal loss function. For the vision-language approach, the open-source LLaVA-1.5-7B model was evaluated in zero-shot mode using advanced prompt engineering for an aggregative counting task. To enable a direct comparison, the standard Jaccard index was calculated from the aggregated object counts for each damage class.

The results of the experiments revealed a significant performance disparity. The specialized U-Net model demonstrated high effectiveness, achieving an intersection over union score of 0.6141 on the test set. In contrast, the LLaVA model proved unsuitable for accurate quantitative analysis, yielding an extremely low Jaccard index of approximately 0.063, primarily due to its systemic failure to correctly identify and count objects (Recall ≈ 0.07). The scientific novelty lies in being the first study to quantitatively document this order-of-magnitude capability gap, confirming that for tasks requiring high-precision mapping, specialized segmentation models remain the indispensable tool.

Oleksii Rumiantsev, Yurii Oliinyk
Copyright (c) 2025 Information, Computing and Intelligent systems
License: https://creativecommons.org/licenses/by/4.0/
Published: Sat, 27 Dec 2025 00:00:00 +0200

DDoS attack detection with data imperfections using machine learning algorithms
https://itvisnyk.kpi.ua/article/view/334076

DDoS (Distributed Denial of Service) attacks remain a prevalent issue even in recent years. The modern network environment is highly dynamic and characterized by large volumes of traffic. Existing research covers several models, techniques, and approaches to detecting DDoS traffic that aim to optimize detection on controlled datasets. However, unintentional noise or data corruption may lower the efficacy of such methods.
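The kind of data imperfection studied here can be simulated directly on a flow-feature matrix. A hedged sketch follows; the missing-value rate and noise scale are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def corrupt(X: np.ndarray, missing_rate: float = 0.05,
            noise_std: float = 0.1, seed: int = 42) -> np.ndarray:
    """Simulate data imperfections on a feature matrix: randomly blank out
    values (NaN) and add Gaussian noise to the remaining measurements."""
    rng = np.random.default_rng(seed)
    Xc = X.astype(float)
    mask = rng.random(Xc.shape) < missing_rate            # values to drop
    Xc[mask] = np.nan                                     # missing entries
    Xc += rng.normal(0.0, noise_std * X.std(), Xc.shape)  # NaN stays NaN
    return Xc
```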
As such, determining the most effective ways to detect DDoS traffic under data imperfections is necessary for reliable network performance.

Therefore, the object of this research is the usage of machine learning algorithms for detecting incoming DDoS attacks. The purpose of this research is to evaluate the performance of machine-learning-based DDoS detection in terms of detection accuracy while simulating imperfect data conditions. The study also examines the impact of class rebalancing on the modified data. To achieve this aim, a variety of machine learning algorithms were implemented and tested on the CIC-DDoS2019 dataset. The data are modified by removing values and introducing noise and then tested; the classes are then resampled and the dataset is tested again. The goal is to achieve over 90% accuracy in classifying the type of DDoS attack and to determine how strongly the changes affect the performance of the algorithms.

The results of the testing indicated that several solutions reach the target mark and that changes to the dataset under realistic conditions do not significantly affect the final result. However, all tested models show a decrease in accuracy compared to unmodified data, with more complex models showing higher resilience (a smaller decrease in accuracy). In addition, resampling of the data shows a comparable decrease in model accuracy, with more complex models affected less.

The results of this study may be used in developing algorithms for repairing corrupted data or models more resistant to such data changes. Additionally, they may inform the choice of models for practical implementations of a DDoS traffic classification system.

Artem Dremov, Artem Volokyta
Copyright (c) 2025 Information, Computing and Intelligent systems
License: https://creativecommons.org/licenses/by/4.0/
Published: Sat, 27 Dec 2025 00:00:00 +0200

UAeroNet: domain-specific dataset for automation of unmanned aerial vehicles
https://itvisnyk.kpi.ua/article/view/341779

This paper addresses the challenges and key principles of designing domain-specific datasets for the automation of unmanned aerial vehicles. Such datasets play a key role in building intelligent systems that enable autonomous operation and support data-driven decisions. The study presents the approaches we used for data collection, analysis, and annotation, highlighting their importance and practical impact on real-world applications. Preparing a domain-specific dataset for automating unmanned aerial vehicle operations (such as navigation and environmental monitoring) is challenging due to frequently low image resolution, complex weather conditions, a wide range of object scales, background noise, and heterogeneous terrain landscapes. Existing open datasets typically cover only a limited variety of unmanned aerial vehicle use cases, which restricts the ability of deep learning models to perform adequately under non-standard or unpredictable conditions.

The object of the study is video data acquired by unmanned aerial vehicles for creating domain-specific datasets that enable machine learning models to perform autonomous object recognition, navigation, obstacle avoidance, and interaction with the environment with minimal operator involvement.
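The annotation workflow described below relies on the Computer Vision Annotation Tool. A hypothetical sketch of summarizing such an export follows; the file name and the track/box schema are assumptions based on CVAT's common video-export format, not details given by the authors:

```python
import xml.etree.ElementTree as ET
from collections import Counter

def summarize_tracks(path: str):
    """Count tracks per class and total labeled boxes in a CVAT-style
    video export (<track label="..."><box .../>...</track>)."""
    root = ET.parse(path).getroot()
    per_class, instances = Counter(), 0
    for track in root.iter("track"):
        per_class[track.get("label")] += 1
        instances += len(track.findall("box"))
    return per_class, instances

# Usage (assuming a CVAT export named annotations.xml):
# classes, total = summarize_tracks("annotations.xml")
# print(sum(classes.values()), "tracks,", total, "instances,",
#       len(classes), "classes")
```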
The subject of the study is the collection, preparation, and annotation of video data acquired by unmanned aerial vehicles. The purpose of the study is to develop and systematize a workflow for creating specialized datasets to train robust models capable of autonomously recognizing objects in real-time video captured by unmanned aerial vehicles. To achieve this goal, a workflow was designed for collecting and annotating video data; raw video data were acquired from unmanned aerial vehicle sensors and manually annotated using the Computer Vision Annotation Tool.

As a result of this work, we developed a domain-specific dataset (UAeroNet) using an open-source annotation tool for the object tracking task in real scenarios. UAeroNet consists of 456 annotated tracks and a total of 131,525 labeled instances belonging to 13 distinct classes.

Yuriy Kochura, Yevhenii Trochun, Vladyslav Taran, Yuri Gordienko, Oleksandr Rokovyi, Sergii Stirenko
Copyright (c) 2025 Information, Computing and Intelligent systems
License: https://creativecommons.org/licenses/by/4.0/
Published: Sat, 27 Dec 2025 00:00:00 +0200

Deep Q-learning policy optimization method for enhancing generalization in autonomous vehicle control
https://itvisnyk.kpi.ua/article/view/341723

The development of autonomous vehicle control policies based on deep reinforcement learning is a principal technical problem for cyber-physical systems, fundamentally constrained by the high dimensionality of state spaces, inherent algorithmic instability, and a pervasive risk of policy over-specialization that severely limits generalization to real-world scenarios. The object of this investigation is the iterative process of forming a robust control policy within a simulated environment, while the subject focuses on the influence of specialized reward structures and initial training conditions on policy convergence and generalization capability. The study's aim is to develop and empirically evaluate a deep Q-learning policy optimization method that utilizes dynamic initial conditions to mitigate over-specialization and achieve stable, globally optimal adaptive control. The developed method formalizes two optimization criteria. First, the adaptive reward function serves as the safety and convergence criterion, defined hierarchically with major penalties for collision, intermediate incentives for passing checkpoints, and a continuous minor penalty for elapsed time to drive efficiency. Second, the mechanism of dynamic initial conditions acts as the policy generalization criterion, designed to inject necessary stochasticity into the state distribution. The agent is modeled as a vehicle equipped with an eight-sensor system providing 360-degree coverage, making decisions from a discrete action space of seven options. Its ten-dimensional state vector integrates normalized sensor distance readings with normalized dynamic characteristics, including speed and angular error. Empirical testing confirmed the policy's vulnerability under baseline fixed-start conditions, where the agent demonstrated over-specialization and stagnated at a traveled distance of approximately 960 conventional units after 40,000 episodes. The subsequent application of the dynamic initial conditions criterion successfully addressed this failure.
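A minimal sketch of the two criteria as they might appear in a training loop; the reward magnitudes, state ranges, and reset interface are illustrative assumptions, not the authors' implementation:

```python
import random

# Hierarchical reward: safety dominates, checkpoints guide progress,
# and a small per-step penalty drives time efficiency.
R_COLLISION, R_CHECKPOINT, R_STEP = -100.0, 10.0, -0.1

def reward(collided: bool, checkpoint_passed: bool) -> float:
    if collided:
        return R_COLLISION
    return (R_CHECKPOINT if checkpoint_passed else 0.0) + R_STEP

def reset_dynamic(track_length: float) -> dict:
    """Dynamic initial conditions: start each episode from a random pose
    so the policy must generalize from its state mapping rather than
    memorize one trajectory from a fixed start."""
    return {
        "position": random.uniform(0.0, track_length),
        "heading_error": random.uniform(-0.3, 0.3),  # radians
        "speed": random.uniform(0.0, 5.0),
    }
```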
By forcing the agent to rely on its generalized state mapping instead of trajectory memory, this approach successfully overcame the learning plateau, enabling the agent to achieve full, collision-free track traversal between 53,000 and 54,000 episodes. Final optimization, driven by the time penalty, reduced the total track completion time by nearly half. This verification confirms the method's value in producing robust, stable, and efficient control policies suitable for integration into autonomous transport cyber-physical systems.

Andrii Pysarenko, Mykhailo Drahan
Copyright (c) 2025 Information, Computing and Intelligent systems
License: https://creativecommons.org/licenses/by/4.0/
Published: Sat, 27 Dec 2025 00:00:00 +0200

Methodology of adaptive data processing in IoT monitoring systems with multilevel sensor data filtering and self-tuning
https://itvisnyk.kpi.ua/article/view/341409

The study focuses on the processes of collecting and preprocessing heterogeneous sensor data. The aim of the research is to develop a method of adaptive filtering and automatic trigger adjustment that ensures stable operation of IoT monitoring systems in the presence of noise, impulse outliers, and seasonal fluctuations.

A methodology for adaptive data processing is proposed, combining multi-level data filtering with automatic self-adjustment of control thresholds in monitoring systems. This approach not only improves the accuracy of real-time sensor measurements but also dynamically adapts the monitoring system parameters to changing operating conditions, thereby minimizing the number of false incidents.

Within the study, a model of multi-level filtering was formalized, based on a median filter, a moving-average filter, and an exponential smoothing method. The multi-level filter provides comprehensive data cleansing, stabilization of time series, and extraction of key trends. A mechanism for automatic adjustment of control thresholds in the Zabbix monitoring system was developed, in which threshold values are determined from statistical parameters and trends identified at the multi-level filtering stage. This mechanism integrates into the subsequent data-processing pipeline, ensuring that the system automatically accounts for daily, seasonal, and other fluctuations of the dynamic data-collection environment.

Experimental studies involving various types of sensors confirmed improved measurement accuracy and a significant reduction in false alerts in the monitoring system. In particular, humidity-measurement accuracy improved by an average of 6.52%, while impulse temperature spikes were reduced by 53.06%. Compared to traditional approaches, the proposed methodology provides higher noise resilience and adaptability to changing environmental conditions, making it an effective solution for industrial, environmental, and other real-time IoT systems.

Anatolii Haidai, Iryna Klymenko
Copyright (c) 2025 Information, Computing and Intelligent systems
License: https://creativecommons.org/licenses/by/4.0/
Published: Sat, 27 Dec 2025 00:00:00 +0200

Method for combining CNN-based features with geometric facial descriptors in emotion recognition
https://itvisnyk.kpi.ua/article/view/333629

This study presents a method for combining CNN-based visual features with geometric facial descriptors to improve the accuracy of emotion recognition in static images.
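A minimal sketch of the fusion step that the following description details: landmark-derived geometric descriptors are concatenated with a deep embedding into one feature vector. Landmark indexing follows the common 68-point convention; the function names and descriptor choice are illustrative, not the author's code:

```python
import numpy as np

def geometric_descriptors(lm: np.ndarray) -> np.ndarray:
    """lm: (68, 2) facial landmarks (dlib-style indexing).
    Returns distances normalized by face width for scale invariance."""
    face_w = np.linalg.norm(lm[16] - lm[0])       # jaw extremes
    right_eye = lm[36:42].mean(axis=0)            # eye centers
    left_eye = lm[42:48].mean(axis=0)
    interocular = np.linalg.norm(left_eye - right_eye)
    mouth_w = np.linalg.norm(lm[54] - lm[48])     # mouth corners
    brow_h = np.linalg.norm(lm[19] - right_eye)   # brow-to-eye height
    return np.array([interocular, mouth_w, brow_h]) / face_w

def fuse(embedding: np.ndarray, lm: np.ndarray) -> np.ndarray:
    """Concatenate the CNN embedding with geometric descriptors into a
    single feature vector for a multiclass linear classifier."""
    return np.concatenate([embedding, geometric_descriptors(lm)])
```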
The method integrates deep convolutional embeddings extracted from a pre-trained ResNetV2_101 model within the ML.NET framework with handcrafted geometric features computed from facial landmarks. Open-source datasets with labeled emotional categories were used for the experiments. At the first stage, deep image embeddings were obtained through transfer learning. At the second stage, 68 facial landmarks were detected to calculate distances and proportional relationships such as interocular distance, mouth width, eyebrow height, and other geometry-based indicators. These visual and geometric representations were concatenated into a unified feature space and classified using a multiclass linear model. The hybrid method achieved approximately four percentage points higher accuracy than the baseline CNN model relying solely on pixel-level features (from about 63% to 67%), confirming that combining heterogeneous features enhances generalization and robustness. The results also highlight that geometric descriptors act as stabilizing factors, compensating for noise, occlusions, and lighting variations that degrade CNN-only models. The developed pipeline demonstrates the feasibility of integrating interpretable geometric cues with deep embeddings directly in C# using ML.NET. The research novelty lies in proposing an interpretable hybrid model for emotion recognition that improves reliability while maintaining compatibility with .NET-based applications. The approach offers an accessible solution for developers working within enterprise .NET ecosystems, enabling direct deployment without cross-language integration. Future research will focus on extending the model toward multimodal emotion analysis that incorporates speech, gesture, and physiological signals to enhance contextual understanding of affective states. Additionally, the hybrid model can serve as a diagnostic tool for studying emotion dynamics in psychological and behavioral research.

Liudmyla Zichenko
Copyright (c) 2025 Information, Computing and Intelligent systems
License: https://creativecommons.org/licenses/by/4.0/
Published: Sat, 27 Dec 2025 00:00:00 +0200

Optimized syntax concept for variable scoping, loop structures, and flow control in programming language
https://itvisnyk.kpi.ua/article/view/341609

This article examines syntactic redundancy in modern programming languages and its impact on code perception, readability, and logical consistency. The object of the study is the analysis of redundant syntactic constructs, particularly those related to variable declarations, scope management, loop structures, and flow control mechanisms. The primary aim is to develop and substantiate an optimized syntax concept that combines the declarative rigor of classical languages with the simplicity of dynamic systems, reducing code redundancy and improving cognitive ergonomics for developers.

The research methodology involved a comparative analysis of key syntactic elements across different language paradigms. The materials for the study included a formal comparison of semantics and an evaluation of equivalent program fragments written in classical languages and in the proposed conceptual language.

The results show that the proposed syntactic model significantly reduces auxiliary symbols, improves code clarity, and lowers cognitive load. The scientific novelty is a holistic syntax model defined by three key innovations.
First, a simplified variable management system creates local variables automatically, eliminating keywords like var or global and using explicit markers for outer-scope access. Second, a universal loop operator unifies the functionality of traditional for, while, and do-while loops, allowing condition evaluation at the beginning, middle, or end of the block. Third, the traditional goto operator is replaced with a structured try-throw construct, providing a safe, semantically coherent mechanism for exiting nested blocks and handling errors. This unified approach forms a basis for further research into minimalist syntax focused on naturalness and readability.

Oleksandr Zhyrytovskyi, Roman Zubko
Copyright (c) 2025 Information, Computing and Intelligent systems
License: https://creativecommons.org/licenses/by/4.0/
Published: Sat, 27 Dec 2025 00:00:00 +0200

A multifactor model for detecting propaganda in textual data
https://itvisnyk.kpi.ua/article/view/342630

Detecting elements of propaganda in large volumes of textual data is currently one of the key tools in combating information warfare worldwide. This paper presents a multifactor model for determining the level of propaganda in a publication. The analyzed publications included text-based news articles and social media posts, which were processed using both quantitative and semantic text analysis methods. The model was constructed using the method of linear convolution, which enables the integration of multiple heterogeneous indicators into a unified value reflecting the degree of propaganda.

The proposed model considers thirteen indicators, each of which, when exhibiting a high value, signals the potential presence of propaganda within a text. The indicators encompass lexical, syntactic, and semantic characteristics such as emotional tone, subjective evaluation, presence of manipulative triggers, and calls to action. The value of each indicator was calculated using methods of statistical analysis, intelligent data analysis, and machine learning. An algorithm for determining the influence level of each factor was proposed, as well as a scale for assessing the overall level of propaganda. For every analyzed publication, a utility function value was computed to quantify its propaganda intensity. The threshold value of this utility function, beyond which a publication is considered propagandistic, was defined as the sample mean across the dataset. This approach allows for an objective classification of textual materials without the need for expert labeling. The advantage of the developed method is that each indicator is derived exclusively from empirical statistical data and validated computational procedures, eliminating human subjectivity. The study demonstrates that the modified multifactor model can serve as a universal analytical tool for detecting propaganda in various types of textual data, thereby enhancing the transparency and reliability of media content analysis.

Olena Gavrilenko, Kyryl Feshchenko
Copyright (c) 2025 Information, Computing and Intelligent systems
License: https://creativecommons.org/licenses/by/4.0/
Published: Sat, 27 Dec 2025 00:00:00 +0200

Multi-strategy AJAX and event-driven state management for responsive web applications
https://itvisnyk.kpi.ua/article/view/341787

This research addresses the engineering of high-performance, responsive web applications for complex data and real-time user interaction.
The study focuses on client-server integration in a monolithic Django architecture, specifically the orchestration of asynchronous client-side technologies (AJAX, JavaScript) with server-side logic (Python/Django). The goal is to design, implement, and validate a Unified AJAX Integration Framework. This framework enables seamless real-time data exchange, dynamic updates, and complex state management for diverse components: interactive tables, multi-dimensional charts, multi-step forms, and the Checkout Session Container. The materials include the Django framework, jQuery for AJAX, and JavaScript libraries (Chart.js, DataTables). The methods involve systematic software architecture design, asynchronous programming analysis, RESTful API development, and empirical performance benchmarking of data-loading and state management strategies. The scientific contribution is twofold. First, a Multi-Strategy AJAX Integration Model is formalized as a decision framework that dynamically selects between server-side rendering (django-tables2), client-side rendering (vanilla jQuery/DataTables), and a hybrid AJAX-DataTables approach based on data complexity, volume, and interaction patterns. Second, an Event-Driven State Management System is proposed as a robust design for distributed, session-based UI components, using a centralized AJAX action dispatcher and a universal state synchronization function. This ensures data consistency across independent page components and eliminates race conditions in concurrent operations. As a result, the framework achieved a significant reduction in server load and perceived latency. The benchmarked components consistently showed sub-200 ms response times for datasets over 10,000 records. The cart system handled over 1,000 consecutive operations without any state desynchronization.

Nataliia Rudnikova, Oleksii Nedashkivskyi
Copyright (c) 2025 Information, Computing and Intelligent systems
License: https://creativecommons.org/licenses/by/4.0/
Published: Sat, 27 Dec 2025 00:00:00 +0200
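A minimal sketch of the centralized AJAX action dispatcher described above, in Django; the view name, action names, and session layout are illustrative assumptions, not the authors' code:

```python
import json
from django.http import HttpRequest, JsonResponse

def _add_item(session, payload):
    """Mutate the session-backed cart; the session is the single source
    of truth, so every page component re-syncs from the same state."""
    cart = session.get("cart", {})
    sku = payload["sku"]
    cart[sku] = cart.get(sku, 0) + int(payload.get("qty", 1))
    session["cart"] = cart  # reassignment marks the session as modified
    return cart

ACTIONS = {"add_item": _add_item}  # one dispatcher, many actions

def ajax_dispatch(request: HttpRequest) -> JsonResponse:
    """Central AJAX endpoint: routes a JSON 'action' to its handler and
    returns the full state so all components stay consistent."""
    payload = json.loads(request.body)
    handler = ACTIONS.get(payload.get("action"))
    if handler is None:
        return JsonResponse({"ok": False, "error": "unknown action"},
                            status=400)
    state = handler(request.session, payload)
    return JsonResponse({"ok": True, "state": state})
```

Routing every mutation through one endpoint of this shape is what makes a universal client-side synchronization function possible: each response carries the complete state, so independent components never diverge.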