This article provides a comparative analysis of methods for filtering a signal obtained with a spectroradiometer. The following filtering methods were studied: the moving average, spline interpolation, and the Savitzky-Golay filter. An Ocean Insight SR-2XR250-25 spectroradiometer was used as the spectral radiation receiver, and a white LED as the radiation source. Based on the results of the study, the most suitable filter for processing spectral measurements of light sources was determined; it will be used in the software of the goniospectroradiometer under development.
Keywords: spectral density of radiation, spectroradiometer, radiation receiver, radiation source, signal filtering methods
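As a rough illustration of the three smoothing methods compared in the study, the sketch below applies them to a synthetic noisy spectrum with NumPy and SciPy; the LED-like test signal, the window sizes, and the spline smoothing factor are illustrative assumptions, not the parameters used by the authors.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import UnivariateSpline

wavelengths = np.linspace(380, 780, 400)              # nm, visible range
spectrum = np.exp(-((wavelengths - 550) / 60) ** 2)   # idealized LED-like peak
noisy = spectrum + np.random.normal(0, 0.02, wavelengths.size)

# 1. Moving average over an 11-sample window.
window = 11
moving_avg = np.convolve(noisy, np.ones(window) / window, mode="same")

# 2. Smoothing spline (smoothing factor s chosen heuristically).
spline = UnivariateSpline(wavelengths, noisy, s=0.5)(wavelengths)

# 3. Savitzky-Golay filter (11-sample window, cubic polynomial).
sg = savgol_filter(noisy, window_length=11, polyorder=3)
```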
The paper proposes a solution to geological problems using probabilistic and statistical methods. It presents the results of spectral correlation data analysis, which involves processing digital geoinformation organized into three-dimensional regular grids. The possibilities of applying methods of statistical, spectral, and correlation analysis, as well as linear optimal filtering, anomaly detection, classification, and pattern recognition, are explored. Spectral correlation and statistical analysis of the geodata were conducted, including the calculation of Fourier spectra, various correlation functions, and gradient characteristics of geofields.
Keywords: interprofile correlation, self-adjusting filtering, weak signal detection, geological zoning and mapping, spatially distributed information
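For context, the sketch below shows the kind of spectral, correlation, and gradient computations listed above, applied to synthetic 2-D grids standing in for real geofields; the grid sizes and the use of SciPy's correlate2d are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
field_a = rng.normal(size=(64, 64))                      # synthetic geofield grid
field_b = np.roll(field_a, 3, axis=1) + 0.1 * rng.normal(size=(64, 64))

# 2-D Fourier amplitude spectrum of a gridded geofield.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(field_a)))

# Cross-correlation between two fields (e.g., adjacent profiles).
xcorr = correlate2d(field_a - field_a.mean(), field_b - field_b.mean(), mode="same")

# Gradient characteristics of the field.
grad_y, grad_x = np.gradient(field_a)
grad_magnitude = np.hypot(grad_x, grad_y)
```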
The article discusses the development of data normalization and standardization tools using Python libraries. The theoretical foundations and the formulas used to normalize and standardize data are described. The internal calculations of the developed software rely on the Pandas and NumPy libraries, while the external interface is built on the Streamlit library, which makes it possible to deploy web applications without additional resources. Code fragments are provided and implementation mechanisms are explained. The developed tool is described in detail: its functionality, user interface, and examples of use. The importance of data preprocessing and of selecting an appropriate method is discussed, with final remarks on the usefulness of interactive data processing tools.
Keywords: data processing, statistics, information systems, Python web systems
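A minimal sketch of the two preprocessing operations such a tool implements, assuming the standard min-max and z-score formulas with Pandas (the tool's actual code and its Streamlit interface are not reproduced here):

```python
import pandas as pd

df = pd.DataFrame({"x": [10.0, 20.0, 30.0, 40.0], "y": [1.0, 4.0, 9.0, 16.0]})

# Min-max normalization: (x - min) / (max - min), mapping each column to [0, 1].
normalized = (df - df.min()) / (df.max() - df.min())

# Z-score standardization: (x - mean) / std, giving zero mean and unit variance.
standardized = (df - df.mean()) / df.std()
```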
Designing automated sorting stations for municipal solid waste (MSW) requires algorithms and devices that can determine MSW fractions with the necessary level of detail. Sorting stations that identify the basic morphological components of MSW already exist, but deeper fraction detailing remains an open problem. The aim of this work is to develop an algorithm for extracting MSW fractions with the possibility of regulating the component composition of the waste. A methodology for synthesizing a device that determines waste fractions is presented. A finite sequence automaton is proposed as the sorting algorithm, and the logical equations are synthesized on the basis of a Moore automaton. Operation of the device was simulated in the MULTISIM program. Given appropriate sensors, the technique can be implemented in practice, and the results can be useful in the design of MSW sorting stations. The experiment demonstrated that the sequence-automaton synthesis technique makes it possible to develop an analyzer for determining refined waste fractions. Practical implementation requires analyzers for determining the components of MSW, which can contribute to more detailed sorting of MSW in the design of sorting stations.
Keywords: solid municipal waste, MSW, sorting, sequence automaton, Moore automaton
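As a hedged sketch of the underlying idea, the Moore automaton below emits an output that depends only on its current state; the states, sensor inputs, and diverter outputs are hypothetical placeholders, not the logic equations synthesized in the article.

```python
# state: (output, {input symbol: next state})
MOORE = {
    "idle":          ("none",           {"metal": "metal_found", "plastic": "plastic_found"}),
    "metal_found":   ("divert_metal",   {"clear": "idle"}),
    "plastic_found": ("divert_plastic", {"clear": "idle"}),
}

def run(inputs, state="idle"):
    outputs = []
    for symbol in inputs:
        output, transitions = MOORE[state]
        outputs.append(output)          # Moore machine: output is tied to the state
        state = transitions.get(symbol, state)
    return outputs

print(run(["metal", "clear", "plastic", "clear"]))
# ['none', 'divert_metal', 'none', 'divert_plastic']
```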
This article is devoted to the development of a collision detection technique using polygonal meshes and neural networks. Collisions are an important aspect of realistically simulating physical interactions. Traditional collision detection methods have limitations related to accuracy and computational complexity. A new approach based on neural networks for collision detection with polygonal meshes is proposed. Neural networks have shown excellent results in various computer vision and image processing tasks, and in this context they can be effectively applied to polygon pattern analysis and collision detection. The main idea of the technique is to train a neural network on a large dataset containing information about the geometry of objects and their movement for automatic collision detection. Training the network requires a special module responsible for storing and preparing the dataset; this module provides the collection, structuring, and storage of data about polygonal models, their movements, and collisions. The work includes the development and testing of a training algorithm on the created dataset, as well as an assessment of the quality of the network's predictions in a controlled environment with various collision conditions.
Keywords: modeling, collision detection techniques using polygonal meshes and neural networks, dataset, assessing the quality of network predictions
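A hedged sketch of the core idea: a small network trained on geometric features of object pairs (here random placeholder features) to predict whether they collide. The dataset module described above would supply these features from actual polygonal models and trajectories; the feature dimension and architecture are assumptions.

```python
import torch
import torch.nn as nn

features = torch.randn(1024, 12)                  # e.g., bounding boxes + velocities
labels = torch.randint(0, 2, (1024, 1)).float()   # 1 = collision, 0 = no collision

model = nn.Sequential(nn.Linear(12, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)   # binary collision / no-collision loss
    loss.backward()
    optimizer.step()
```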
The article discusses the concept of a software package implemented on the 1C:Enterprise platform for automating the accounting of the activities of shelters for homeless animals. The architecture of the solution is described, highlighting the system's integration modules with the VKontakte social network and the Telegram messenger. Sequence and activity diagrams of the processes through which citizens interact with the key functionality of the system are presented.
Keywords: animal shelter, homeless animals, 1C:Enterprise, automation, activity accounting, animals, software package, information system, Telegram bot, integration with VKontakte, pet search
Orthogonal frequency division multiplexing (OFDM) is a promising technology in wireless communication systems. The simultaneous use of multiple subcarriers provides a relatively high information transfer rate. Using mathematical models of discrete wavelet transforms instead of the fast Fourier transform (FFT) makes it possible to increase the speed of signal processing through modular residue class codes (RNS codes). At the same time, these codes can be used to increase the noise immunity of OFDM systems. Block turbo codes (TC) are widely used to combat the error bursts that occur when signals are transmitted over a communication channel. The article presents a method for constructing modular turbo codes based on the residue number system (RNS turbo codes). Using RNS turbo codes naturally entails changes in the structure of an OFDM system. Therefore, developing a method for constructing an RNS-based modular turbo code and a structural model of a noise-immune OFDM system that uses it is an urgent task. The purpose of the article is to increase the noise immunity of OFDM systems that use wavelet transforms implemented in RNS codes instead of the FFT, through the use of the RNS modular turbo code.
Keywords: modular residue class codes, residue number system, residue number system modular turbo code, error correction algorithm, structural model, multiplexing, orthogonal frequency division multiplexing
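For reference, a minimal sketch of the residue number system (RNS) arithmetic that underlies such modular codes: a number is represented by its residues modulo pairwise coprime bases and recovered via the Chinese remainder theorem. The bases below are illustrative.

```python
from math import prod

MODULI = (3, 5, 7)   # pairwise coprime bases; dynamic range = 3 * 5 * 7 = 105

def to_rns(x):
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) = modular inverse of Mi mod m
    return x % M

assert from_rns(to_rns(42)) == 42

# Addition is carry-free and component-wise, one residue channel per base.
a, b = to_rns(20), to_rns(30)
s = tuple((x + y) % m for x, y, m in zip(a, b, MODULI))
assert from_rns(s) == 50
```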
Currently, Internet of Things technologies are actively used in manufacturing enterprises for remote monitoring and preventive control of technological processes. The article is devoted to the development of an original mathematical model of the transmission of information packets and acknowledgements in an Industrial Internet of Things system, which makes it possible to assess the probability of duplication of messages sent to the production process control center. The model is built on the mathematical apparatus of probabilistic graphs, which takes into account all possible states of the simulated process and the probabilities of transitions between them. Computational experiments showed that the developed model makes it possible to justify the choice of the maximum number of retransmissions for which the probability of message duplication does not exceed specified permissible values at the current bit error rate.
Keywords: industrial Internet of things, telemetry data, production process control, message duplication, retransmissions, bit error rate, sensor devices, server, probabilistic graph
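The article's probabilistic-graph model itself is not reproduced here; as a toy stand-in, the sketch below estimates the duplication probability in a simplified stop-and-wait scheme, where a duplicate arises when a data frame gets through but its acknowledgement is lost and the sender retransmits. The frame lengths and independent-bit-error loss model are illustrative assumptions.

```python
def p_frame_loss(ber, bits):
    """Probability that a frame of the given length has at least one bit error."""
    return 1.0 - (1.0 - ber) ** bits

def p_duplicate(ber, data_bits=1024, ack_bits=64, max_retx=3):
    p_data = 1.0 - p_frame_loss(ber, data_bits)   # data frame delivered
    p_ack = 1.0 - p_frame_loss(ber, ack_bits)     # acknowledgement delivered
    p_dup, p_attempt = 0.0, 1.0                   # p_attempt: this attempt happens
    for _ in range(max_retx):
        p_retx_after_delivery = p_data * (1.0 - p_ack)   # delivered, but ack lost
        p_dup += p_attempt * p_retx_after_delivery * p_data
        p_attempt *= (1.0 - p_data) + p_retx_after_delivery
    return p_dup

print(p_duplicate(ber=1e-4))   # grows with max_retx and with the bit error rate
```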
The article analyzes the impact of transformation types on the training quality of neural network classification models and proposes a new approach to expanding image sets using reinforcement learning.
Keywords: neural network model, training dataset, data set expansion, image transformation, recognition accuracy, reinforcement learning, image vector
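For context, these are typical image transformations used to expand a training set; the sketch below composes a few of them with torchvision. The reinforcement-learning-driven selection of transformations proposed in the article is not reproduced.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
])
# augmented = augment(image)   # applied to a PIL image (or image tensor)
```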
Detecting aggressive and abnormal driver behavior, which depends on a multitude of external and internal factors, is critically important for enhancing road safety. This article provides a comprehensive review of machine learning methods applied for driver behavior classification. An extensive analysis is conducted to assess the pros and cons of existing machine learning algorithms. Various approaches to problem formulation and solution are discussed, including supervised and unsupervised learning techniques. Furthermore, the review examines the diverse range of data sources utilized in driver behavior classification and the corresponding technical tools employed for data collection and processing. Special emphasis is placed on the analysis of Microelectromechanical Systems sensors and their significant contribution to the accuracy and effectiveness of driver behavior classification models. By synthesizing existing research, this review not only presents the current state of the field but also identifies potential directions for future research, aiming to advance the development of more robust and accurate driver behavior classification systems.
Keywords: machine learning, driver classification, driver behavior, data source, microelectromechanical system, driver monitoring, driving style, behavior analysis
This article explores methods for improving the reliability of telecommunication systems in Turkmenistan. The authors consider modern approaches to ensuring the stability and reliability of communication networks in the context of a rapidly changing technological environment. The article analyzes the main challenges faced by telecom operators in the country and proposes effective strategies to ensure the smooth operation of telecommunication systems. The results of the study allow us to identify key measures to improve the reliability of the communication infrastructure in Turkmenistan and ways to optimize user service processes.
Keywords: communication infrastructure, trends, prospects, system reliability, mobile communications, evolution, 2G, 3G, 4G, network reliability
This paper investigates the effectiveness of the distance fields method for rendering 3D graphics in comparison with the traditional polygonal approach. The main attention is paid to the analytical representation of models, which makes it possible to determine the shortest distance to the objects of a scene and provides high speed even on weak hardware. The comparative analysis covers the achievable level of model detail, the applicability of different light sources, reflection mapping, and model transformation. Conclusions are drawn about the promising potential of the distance field method for 3D graphics, especially in real-time rendering systems, and the relevance of further research and development in this area is emphasized. Within the framework of this work, a universal software implementation of the distance fields method was created.
Keywords: computer graphics, rendering, 3D graphics, ray marching, polygonal graphics, 3D graphics development, modeling, 3D models
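A minimal sketch of the distance-field (ray marching) idea: the scene is an analytic signed distance function (here a single sphere), and a ray advances by the shortest distance to the scene at each step, so it can never overshoot a surface. The scene and step limits are illustrative.

```python
import numpy as np

def scene_sdf(p):
    """Signed distance to the scene: a unit sphere centered at (0, 0, 5)."""
    return np.linalg.norm(p - np.array([0.0, 0.0, 5.0])) - 1.0

def ray_march(origin, direction, max_steps=100, eps=1e-4, max_dist=100.0):
    t = 0.0
    for _ in range(max_steps):
        d = scene_sdf(origin + t * direction)
        if d < eps:
            return t          # hit: distance along the ray to the surface
        t += d                # safe step: exactly the distance to the nearest object
        if t > max_dist:
            break
    return None               # miss

print(ray_march(np.zeros(3), np.array([0.0, 0.0, 1.0])))   # ~4.0
```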
The condition of a vehicle sensor system is an effective indicator used by many other vehicle systems. This article is devoted to the problem of choosing a forecasting method for vehicle sensors. Sensor data are considered as multivariate time series. The aim of the study is to determine the best forecasting model for the type of data under consideration. The LSTM neural network-based method and the VARMA statistical method were chosen for the analysis. These methods are preferred because of their ability to process multivariate series with complex relationships, their flexibility, which allows them to be used for series of varying lengths in a wide variety of scenarios, and the high accuracy of their results in numerous applications. The data and plots of computational experiments are provided, enabling the determination of the preferred option for both single-step and multistep forecasting of multivariate time series, based on the values of error metrics and adaptability to rapid changes in data values.
Keywords: forecasting methods, forecast evaluation, LSTM, VARMA, time series, vehicle sensor system
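As a hedged sketch of the statistical branch of the comparison, the code below fits a VARMA model to a synthetic multivariate series with statsmodels and produces a multistep forecast; the series, the order (p, q), and the horizon are illustrative, and the sensor data are not reproduced.

```python
import numpy as np
from statsmodels.tsa.statespace.varmax import VARMAX

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=(200, 3)), axis=0)   # three sensor-like channels

model = VARMAX(series, order=(2, 1))    # VARMA with p = 2 AR and q = 1 MA terms
fitted = model.fit(disp=False)
forecast = fitted.forecast(steps=10)    # 10-step-ahead multistep forecast
```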
The article describes the methodology for developing a client-server application intended for constructing a virtual museum. The creation of the server part of the application with the functions of processing and executing requests from the client part, as well as the creation of a database and interaction with it, is discussed in detail. The client part is developed using the Angular framework and the TypeScript language; the three-dimensional implementation is based on the three.js library, which is an add-on to WebGL technology. The server part is developed on the ASP.NET Core platform in C#. The database schema is based on a Code-First approach using Entity Framework Core. Microsoft SQL Server is used as the database management system.
Keywords: client-server application, virtual tour designer, virtual museum, three.js library, framework, Angular, ASP.NET Core, Entity Framework Core, Code-First, WebGL
This article identifies the main advantages and disadvantages of using VR simulators to improve the professionalism of employees performing work at an enterprise or organization. Existing projects used in various industries are analyzed, and the developed first aid project is described. The simulator allows trainees to practice skills in stopping bleeding in different parts of the body: the arm, leg, and neck. While working on the project, the main factors influencing the quality of the developed VR simulator were identified. In particular, it was found that VR simulators cannot fully simulate fine motor skills of the hands, and the simulator restricts the position of the body in space. Despite these shortcomings, the simulator makes it possible to practice the key skills of providing first aid.
Keywords: virtual reality, VR simulator, personnel training, professional activity, first aid, information technology, modeling
Model multiparameter distributions used in science and technology are analyzed and systematized. Particular attention is paid to the Rosin-Rammler-Weibull-Gnedenko and Kolmogorov-Gauss distributions, which adequately describe single and multiple crushing. The suitability of these distributions for modeling the granulometric composition of industrial waste from mechanical processing is confirmed by physical and computer experiments.
Keywords: distribution function, mathematical model, generalized hyperbolic distributions, crushing, bulk medium, mechanical processing
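For reference, the Rosin-Rammler-Weibull cumulative distribution commonly used to describe particle size after crushing has the form

$$F(d) = 1 - \exp\!\left[-\left(\frac{d}{d_0}\right)^{n}\right],$$

where $F(d)$ is the mass fraction of particles finer than size $d$, $d_0$ is the characteristic particle size, and $n$ is the uniformity exponent; the multiparameter variants discussed in the article generalize this basic form.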
Samples of metal dust generated during milling of gray cast iron were collected experimentally. The machine operating mode, dust collection points, and blowing conditions were varied in the experiments. To ensure the reliability of the result, the physical stage of the analysis of the dimensional characteristics of the dust was performed using two methods: sieving and direct optical measurements. Significant discrepancies in the statistical parameters obtained by different methods were revealed. A hypothesis explaining the differences was proposed and confirmed. An integrated approach to the physical stage of dispersion analysis of bulk media is recommended.
Keywords: wood dust, parametric identification, sieve analysis, laser diffraction, micrographs, mathematical modeling, digital twin
This article presents a study of an approach to developing a medical decision support system (DSS) for selecting formulas for calculating the optical power of intraocular lenses (IOLs) used in the surgical treatment of cataracts. The system is based on methods for building recommendation systems, which makes it possible to automate the choice of an IOL and minimize the risk of human error. Implementing the system in the practice of medical organizations is expected to provide high accuracy and efficiency, significantly reduce the time required for decision-making, and improve the results of surgical interventions.
Keywords: intraocular lens, ophthalmology, formulas for calculating optical power, web application, machine learning, eye parameters, prognostic model, recommendation system, prediction accuracy, medical decision
Modern simulation model design involves a wide range of specialists from various fields, and additional resources are required for developing and debugging software code. This study aims to demonstrate the capabilities of large language models (LLMs) applied at all stages of creating and using simulation models, starting from the formalization of dynamic system models, and to assess the contribution of these technologies to speeding up the creation of simulation models and reducing their complexity. The model development methodology includes the stages of formalization, verification, and the creation of a mathematical model based on dialogues with LLMs. Experiments were conducted on the example of creating a multi-agent community of robots using hybrid automata. The experiments showed that the model created with the help of LLMs produces outcomes identical to those of a model developed in a specialized simulation environment. Based on the analysis of the experimental results, it can be concluded that LLMs have significant potential for accelerating and simplifying the creation of complex simulation models.
Keywords: simulation modeling, large language model, neural network, GPT-4, simulation environment, mathematical model
The occurrence, identification, and management of risks arising during the construction process are analyzed. Decision-making under uncertainty in construction projects requires methods that ensure the reliability and effectiveness of decisions. Such a method was developed by the Russian Project Management Association. The paper provides an example of applying this method at a real construction site: the risks arising during the implementation of a construction project were analyzed, a risk map was created for the project, and the PERT method was applied to create the calendar plan.
Keywords: uncertainty, risk event, probability, risk, damage, danger, reliability, risk analysis, investment and construction project, PERT method
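For reference, the classical PERT estimate combines the optimistic ($a$), most likely ($m$), and pessimistic ($b$) duration of an activity as

$$t_e = \frac{a + 4m + b}{6}, \qquad \sigma = \frac{b - a}{6},$$

and these expected durations and standard deviations underlie the calendar-plan calculation mentioned above.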
The growing popularity of large language models in various fields of scientific and industrial activity is producing solutions that apply these technologies to very different tasks. This article proposes using the BERT, GPT, and GPT-2 language models to detect malicious code. A neural network model, pretrained on natural texts, is fine-tuned on a preprocessed dataset containing program files with malicious and harmless code. During preprocessing, program files in the form of machine instructions are translated into textual descriptions in a formalized language. The model trained in this way is used to classify software according to whether it contains malicious code. The article describes the conducted experiment and evaluates the quality of this approach in comparison with existing antivirus technologies. Ways to improve the characteristics of the model are also suggested.
Keywords: antivirus, neural network, language models, malicious code, machine learning, model training, fine tuning, BERT, GPT, GPT-2
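A hedged sketch of the fine-tuning step described above, using the Hugging Face transformers library: a pretrained model is further trained to classify formalized textual descriptions of program code as malicious or benign. The model name, the placeholder instruction texts, and the single training step are illustrative; the article's dataset and its machine-instruction-to-text translation are not reproduced.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)    # 0 = benign, 1 = malicious

# Placeholder "formalized descriptions" of machine instructions.
texts = ["mov eax 1 ; call CreateFileA ; ...", "push ebp ; xor eax eax ; ..."]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)   # classification loss computed internally
outputs.loss.backward()
optimizer.step()
```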
The paper proposes a two-stage method of training a robot from demonstrations, combining a diffusion generative model with online fine-tuning using Proximal Policy Optimization. In the offline phase, the diffusion model uses a limited set of expert demonstrations to generate synthetic "pseudo-demonstrations", expanding the variability and coverage of the original dataset. This eliminates the narrow specialization of the strategy and increases its ability to generalize. In the online phase, a robot with a pre-trained strategy adjusts its actions in a real environment (or in a high-precision simulation), which significantly reduces the risks of unsafe actions and the number of required interactions. Additionally, parameter-efficient fine-tuning is introduced to reduce the computational cost of online learning, along with value guidance that focuses the generation of new data on regions of states and actions with high Q-values. Experiments on tasks from the D4RL suite (Hopper, Walker2d, HalfCheetah) show that the approach achieves the highest accumulated reward at lower computational cost compared to alternatives. t-SNE analysis indicates a shift of the synthetic data toward regions of the state-action space with high Q-values, contributing to accelerated learning. The results confirm the promise of the proposed method for robotic applications where a limited volume of demonstrations must be combined with safety and effectiveness in the online phase.
Keywords: robot learning from demonstrations, diffusion generative models, reinforcement learning, Proximal Policy Optimization
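For context, a minimal sketch of the clipped PPO objective used in the online phase (the diffusion pseudo-demonstration generator, value guidance, and parameter-efficient fine-tuning are not reproduced; the batch tensors are toy stand-ins for transition data).

```python
import torch

def ppo_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped PPO surrogate objective (to be minimized)."""
    ratio = torch.exp(log_probs_new - log_probs_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

lp_new = torch.randn(32, requires_grad=True)          # current policy log-probs
lp_old = lp_new.detach() + 0.1 * torch.randn(32)      # behavior policy log-probs
adv = torch.randn(32)                                 # advantage estimates

loss = ppo_loss(lp_new, lp_old, adv)
loss.backward()
```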
The article discusses the application of a neural network autoencoder to the problem of colorizing monochrome images. The network architecture, the training method, and the preparation of training and validation data are described. A dataset of 540 natural landscape images with a resolution of 256 by 256 pixels was used for training. The quality of the model's outputs was compared, and the average values of the metrics, as well as the mean squared error computed on the outputs of a VGG model, are presented.
Keywords: neural networks, machine learning, autoencoder, image quality analysis, colorization, CIELAB
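A hedged sketch of the colorization setup described above: a convolutional autoencoder maps the CIELAB lightness channel (L) to the two chroma channels (a, b). The layer sizes are illustrative, not those of the article.

```python
import torch
import torch.nn as nn

class ColorizationAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),    # 256 -> 128
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, l_channel):                       # (N, 1, 256, 256) L channel
        return self.decoder(self.encoder(l_channel))    # (N, 2, 256, 256) a, b channels

model = ColorizationAutoencoder()
ab = model(torch.randn(1, 1, 256, 256))
print(ab.shape)   # torch.Size([1, 2, 256, 256])
```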
In the article, the authors propose a methodology for managing connections in a community based on a heuristic algorithm for optimally seating participants in a multi-round networking event so as to maximize the likelihood of new partnerships at offline events. The seating algorithm is based on solving the NP-complete maximum clique problem, and the resulting solution is optimized using a permutation crossover algorithm.
Keywords: community management, networking event, optimal seating, maximum clique problem, heuristic algorithm, permutation crossover
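A minimal sketch of the combinatorial core mentioned above: finding a maximum clique in an acquaintance graph with networkx. The encoding (an edge means two participants have not met yet, so a clique is a table of all-new contacts) and the toy graph are assumptions; the article's heuristic and its permutation-crossover optimization are not reproduced.

```python
import networkx as nx

# Edge = "these two participants have NOT met yet", so a clique is a table
# at which every pair of neighbors is a new contact.
g = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (1, 4)])

best = max(nx.find_cliques(g), key=len)   # scan maximal cliques for the largest
print(best)   # e.g. [1, 2, 3]
```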
The article is devoted to the application of large language models (LLMs) in information tasks of decision support systems, using healthcare as an example. Key LLM architectures and their practical implementations are considered, as well as the capabilities of these models for natural language processing and medical data analysis. Special attention is paid to the role of LLMs in automating decision-making processes, including optimizing access to knowledge from clinical guidelines. Examples of the use of LLMs in various fields of medicine are presented. In addition, the prospects for further development of LLMs in healthcare and the associated challenges are discussed.
Keywords: large language models, natural language processing, decision support systems (DSS), industrial engineering, clinical guidelines, international classification of diseases