
  • Application of the Residue Number System in Text Information Processing

    The article explores the application of the residue number system to text information processing. The residue number system, based on the principles of modular arithmetic, represents numbers as sets of residues relative to pairwise coprime moduli. This approach enables parallel computation, potential data compression, and increased noise immunity. The study addresses character encoding, parallel information processing, error detection and correction, the computational advantages of implementing polynomial hash functions, and the practical limitations of the residue number system. A minimal encoding sketch is given after the keywords.

    Keywords: residue number system, modular arithmetic, text processing, parallel computing, data compression, noise immunity, Chinese remainder theorem, polynomial hashing, error correction, computational linguistics
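
    A minimal encoding sketch for the approach described above: integers (here, character codes) are split into residues modulo pairwise coprime bases and reconstructed with the Chinese remainder theorem. The moduli (3, 5, 7) are an assumption for illustration; the article's actual parameters are not reproduced here.

```python
from math import prod

MODULI = (3, 5, 7)  # illustrative pairwise coprime moduli (assumption)

def to_rns(x, moduli=MODULI):
    """Represent an integer as a tuple of residues."""
    return tuple(x % m for m in moduli)

def from_rns(residues, moduli=MODULI):
    """Reconstruct the integer via the Chinese remainder theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return x % M

# Each residue channel can be processed independently, which is the
# source of the parallelism noted in the abstract.
code = ord('A')  # 65 < 3*5*7 = 105, so the representation is unique
assert from_rns(to_rns(code)) == code
```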

  • Development of an environmental monitoring portal

    The article focuses on the development of a web portal for monitoring and forecasting atmospheric air quality in the Khabarovsk Territory. The study analyzes existing solutions in the field of environmental monitoring, identifying their key shortcomings, such as the lack of real-time data, limited functionality, and outdated interfaces. The authors propose a modern solution based on the Python/Django and PostgreSQL technology stack, which enables the collection, processing, and visualization of air quality sensor data. Special attention is given to the implementation of harmful gas concentration forecasting using a recurrent neural network, as well as the creation of an intuitive user interface with an interactive map based on OpenStreetMap. The article provides a detailed description of the system architecture, including the backend, database, and frontend implementation, along with the methods used to ensure performance and security. The result of this work is a functional web portal that provides up-to-date information on atmospheric air conditions, forecast data, and user-friendly visualization tools. The developed solution demonstrates high efficiency and can be scaled for use in other regions. A minimal sketch of the sensor-data storage layer is given after the keywords.

    Keywords: environmental monitoring, air quality, web portal, forecasting, Django, Python, PostgreSQL, neural networks, OpenStreetMap
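
    As a hedged illustration of the Django/PostgreSQL storage layer such a portal might use, the model below is a minimal sketch; the model and field names (AirQualityReading, station, gas, concentration, measured_at) are assumptions, not the authors' schema.

```python
# models.py - minimal sensor-reading model (names are assumptions)
from django.db import models

class AirQualityReading(models.Model):
    station = models.CharField(max_length=100)         # monitoring station id
    gas = models.CharField(max_length=20)              # e.g. "CO", "NO2"
    concentration = models.FloatField()                # measured value, mg/m3
    measured_at = models.DateTimeField(db_index=True)  # sample timestamp

    class Meta:
        ordering = ["-measured_at"]                    # newest readings first
```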

  • Physics-Informed Neural Network Based on Transformer Architecture for Time Series Forecasting in Engineering Systems

    The study addresses the problem of short-term forecasting of ice temperature in engineering systems with high sensitivity to thermal loads. A transformer-based architecture is proposed, enhanced with a physics-informed loss function derived from the heat balance equation. This approach accounts for the inertial properties of the system and aligns the predicted temperature dynamics with the supplied power and external conditions. The model is tested on data from an ice rink, sampled at one-minute intervals. A comparative analysis is conducted against baseline architectures including LSTM, GRU, and Transformer using MSE, MAE, and MAPE metrics. The results demonstrate a significant improvement in accuracy during transitional regimes, as well as robustness to sharp temperature fluctuations, particularly following ice resurfacing. The proposed method can be integrated into intelligent control loops for engineering systems, providing not only high predictive accuracy but also physical interpretability. The study confirms the effectiveness of incorporating physical knowledge into neural forecasting models. A sketch of such a physics-informed loss term is given after the keywords.

    Keywords: short-term forecasting, time series analysis, transformer architecture, machine learning, physics-informed modeling, predictive control
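
    A hedged PyTorch-style sketch of the idea described above: a data (MSE) term plus a penalty for violating a discretized heat balance of the form C*dT/dt = P - k*(T - T_ambient). The balance form, coefficients, and weighting are illustrative assumptions; the article's exact equation is not reproduced here.

```python
import torch

def physics_informed_loss(pred_T, true_T, power, T_amb,
                          C=1.0, k=0.1, dt=60.0, lam=0.5):
    """Data loss plus a penalty for violating a discretized heat balance.

    pred_T, true_T: (batch, time) temperature sequences
    power, T_amb:   (batch, time) supplied power and ambient temperature
    C, k, dt, lam:  illustrative heat capacity, loss coefficient,
                    time step (s), and physics weight (all assumptions)
    """
    data_loss = torch.mean((pred_T - true_T) ** 2)  # standard MSE term
    # Finite-difference rate of change of the predicted temperature
    dT_dt = (pred_T[:, 1:] - pred_T[:, :-1]) / dt
    # Residual of C*dT/dt = P - k*(T - T_ambient) on the predicted series
    residual = C * dT_dt - (power[:, :-1]
                            - k * (pred_T[:, :-1] - T_amb[:, :-1]))
    physics_loss = torch.mean(residual ** 2)
    return data_loss + lam * physics_loss
```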

  • Application of modern language models for automatic transcription and analysis of audio recordings of telephone conversations between sales department employees and clients

    The article examines the possibilities of automatic transcription and analysis of audio recordings of telephone conversations between sales department employees and clients. The relevance of the study stems from the growing volume of voice data and the need for its rapid processing in organizations whose activities depend on selling products or services to clients. Automatic processing of audio recordings makes it possible to check the quality of call center employees' work and to identify deviations from customer conversation scripts. The proposed software solution is based on the Whisper model for speech recognition, the pyannote.audio library for speaker diarization, and the RapidFuzz library for fuzzy string matching during analysis. An experimental study conducted with the developed software solution confirmed that modern language models and algorithms achieve a high degree of automation in processing audio recordings and can serve as a preliminary control tool without the involvement of a specialist. The results confirm the practical applicability of the authors' approach to quality control tasks in sales departments and call centers. A sketch of the pipeline stages is given after the keywords.

    Keywords: call center, audio file, speech recognition, transcription, speaker diarization, replica classification, audio recording processing, Whisper, pyannote.audio, RapidFuzz
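
    A hedged sketch of the three pipeline stages named above. Model names, the access token, the audio path, and the script phrase are placeholders; pyannote.audio pipelines require a Hugging Face token, and exact arguments may differ between library versions.

```python
import whisper                       # speech recognition
from pyannote.audio import Pipeline  # speaker diarization
from rapidfuzz import fuzz           # fuzzy string matching

AUDIO = "call.wav"  # placeholder path

# 1. Transcribe the call (model size is an assumption)
asr = whisper.load_model("base")
text = asr.transcribe(AUDIO)["text"]

# 2. Split the call by speaker (requires a Hugging Face access token)
diarizer = Pipeline.from_pretrained("pyannote/speaker-diarization",
                                    use_auth_token="HF_TOKEN")
for turn, _, speaker in diarizer(AUDIO).itertracks(yield_label=True):
    print(f"{speaker}: {turn.start:.1f}s - {turn.end:.1f}s")

# 3. Fuzzy-check that a required script phrase was spoken (phrase assumed)
script_phrase = "good afternoon, how can I help you"
if fuzz.partial_ratio(script_phrase.lower(), text.lower()) < 80:
    print("Possible script violation")
```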

  • Semantic integration and data adaptation in heterogeneous corporate information systems

    The article addresses the integration and processing of heterogeneous data within a single company, as well as during interaction between various participants in business processes under conditions of digital transformation. Special attention is given to collaboration between equipment manufacturers and industrial enterprises, emphasizing the importance of aligning and transforming data when interacting with heterogeneous information systems. The article discusses the problem of integrating historical data, the challenges of transitioning to new infrastructure, and a solution based on principles similar to those used by open standards such as OpenCL. Particular emphasis is placed on providing complete and consistent datasets, developing effective mechanisms for semantic integration, and using ontological approaches to overcome difficulties in comparing and interpreting diverse data formats. The article highlights the need to continuously update metadata dictionaries and to establish connections between different data sources to ensure high-quality, reliable integration. The proposed methods aim to create sustainable mechanisms for exchanging information among multiple business entities to support informed management decisions.

    Keywords: digital transformation, heterogeneous systems, ERP/MES systems, ontology, semantic integration, metadata, data mapping

  • Calculation of the coefficient of heterogeneity of a mixture when mixing bulk media whose particles differ in size and shape

    The article discusses the structure and operating principle of an improved centrifugal unit for mixing bulk materials, whose distinguishing feature is the ability to control mixing modes. Owing to the unit's design, selecting a rational position of the baffle makes it possible to create conditions for the impact interaction of particle flows under which a high-quality homogeneous mixture is formed from components whose particles differ in size, shape, and other parameters. The resulting mixture is characterized by the coefficient of heterogeneity, whose derivation is based on a probabilistic approach. A computational scheme of the rarefied flow formation process is given, and an expression is derived for calculating the coefficient of heterogeneity when mixing bulk media whose particles differ in size, shape, and other parameters. The research allows one not only to predict the quality of the resulting mixture, but also to identify the factors that most strongly affect the achievement of the required uniformity. A commonly used closed form of this coefficient is given after the keywords.

    Keywords: aggregate, bulk media, mixing, coefficient of heterogeneity, concentration, design scheme, particle size
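
    For orientation, the form of the heterogeneity coefficient most commonly used in mixing practice (the coefficient of variation of the key component's concentration over n spot samples) is shown below; whether the article's probabilistically derived expression reduces to this form is an assumption.

```latex
% V_c (%) for n spot samples with key-component concentrations c_i
% and mean concentration \bar{c}:
V_c = \frac{100}{\bar{c}}
      \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left( c_i - \bar{c} \right)^2}
```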

  • Efficiency of using long-span structures in industrial and civil construction

    The article discusses methods for constructing long-span coverings from precast reinforced concrete elements and prefabricated steel structures. To systematize these design and technological solutions and determine the effectiveness of their application in terms of manufacturability, a comparative analysis was carried out. The construction technologies were compared according to the following parameters: specific and total labor intensity, the level of mechanization, the total number of elements, the average and maximum mass of one element, the total mass of the mounted elements, and the equilibrium coefficient. The analysis showed that for reinforced concrete structures, block installation is most effective: elements are pre-assembled into enlarged blocks at ground level and then lifted and set in the design position. Precast reinforced concrete shells offer a higher level of mechanization and degree of equilibrium, which makes it possible to use crane equipment efficiently, but their considerable weight requires supporting structures and high-capacity cranes. Installing prefabricated steel structures as a single unit, with preliminary enlargement at ground level, is the least labor-intensive option, but the need to install a large number of low-mass piece elements reduces manufacturability.

    Keywords: installation of long-span structures, installation of triple-layer rotational shells of double curvature, installation of steel beam structures, installation of a spatial structural roof unit, installation of the entire roof structure as a single unit

  • Development of a software module for automatic code generation based on UML diagrams

    The article discusses a software module developed by the authors for automatic generation of program code based on UML diagrams. The relevance of developing this module stems from the limitations of existing foreign code generation tools in terms of functionality, ease of use, and support for modern technologies, as well as their unavailability in the Russian Federation. The module analyzes JSON files obtained by exporting UML diagrams from the draw.io online service and converts them into code in a selected programming language (Python, C++, Java) or DDL scripts for a DBMS (PostgreSQL, Oracle, MySQL). The Python language and the Jinja2 template engine were used as the main development tools. The operation of the software module is demonstrated using the example of a small project, "Library Management System". During the study, a series of tests of automatic code generation was conducted on architectures of software information systems developed by students of the Software Engineering bachelor's degree program in the discipline "Design and Architecture of Software Systems". The test results showed that the code generated by the developed module fully complies with the original UML diagrams, including the class structure, the relationships between classes, and the configuration of the database and infrastructure (Docker Compose). The practical significance of the study is that the proposed concept of generating program code from visual UML models built in the popular online editor draw.io significantly simplifies the development of software information systems and can also be used for educational purposes. A miniature of the template-driven generation is given after the keywords.

    Keywords: code generation, automation, python, jinja2, uml diagram, json, template engine, parsing, class diagram, database, deployment diagram
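
    A hedged miniature of the template-driven generation described above: a Jinja2 template renders a Python class from a dictionary of the kind that could be parsed out of a draw.io JSON export. The template text and dictionary keys are illustrative assumptions, not the authors' templates.

```python
from jinja2 import Template

# Illustrative template: one class with typed attributes (keys assumed)
CLASS_TEMPLATE = Template('''\
class {{ name }}:
    def __init__(self{% for a in attrs %}, {{ a.name }}{% endfor %}):
{% for a in attrs %}        self.{{ a.name }} = {{ a.name }}  # {{ a.type }}
{% endfor %}''')

# Stand-in for data parsed from a draw.io JSON export
book = {"name": "Book",
        "attrs": [{"name": "title", "type": "str"},
                  {"name": "author", "type": "str"}]}

print(CLASS_TEMPLATE.render(**book))
```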

  • Deploying and Integrating Grafana, Loki, and Alloy in a Kubernetes Environment

    This article presents a structured approach to deploying and integrating Grafana, Loki, and Alloy in Kubernetes environments. The work was performed on a cluster managed via Kubespray. The architecture focuses on external availability, high fault tolerance, and versatility of use.

    Keywords: monitoring, orchestration, containerization, Grafana, Loki, Kubernetes, Alloy

  • The influence of the functional purpose of buildings on the formation of historical architecture in Europe

    Currently, one of the main factors shaping architecture is the functional purpose of a building, since it determines the essence of the architectural object. The purpose of this work is to study the influence of building functions on the historical architecture of Europe and their impact on the development of modern architecture. The article sets out to classify the functional purposes of buildings and to conduct a retrospective analysis of the development and formation of architectural styles in Europe, drawing on world design experience to identify how a building's function has influenced its planning and volumetric-spatial solutions over the course of architectural development. The research method is an analysis of European architecture from its inception to the present day, carried out on the basis of world design experience across different eras. The study identified four main trends in the development of the functions of modern architecture: integration with nature, creation of adaptive spaces, multifunctionality, and the emergence of new functions. It is concluded that the building function played a decisive role throughout the formation of architecture, giving rise to the huge variety of building types seen today and contributing significantly to the architecture of the 21st century.

    Keywords: architecture, historical architecture, architectural style, functional purpose, European architecture, building type, retrospective analysis, function, influence, development

  • A method for evaluating programmable logic controllers that takes into account production needs

    Choosing a programmable logic controller is one of the most important tasks in designing an automated system. The modern market offers many options that differ in characteristics, and these characteristics carry different priorities for a given production facility. The paper proposes a method for evaluating the overall effectiveness of programmable logic controllers. Selected characteristics are normalized by linear scaling and combined using weight coefficients that reflect the importance of each parameter for the controller under consideration relative to the others; the values of the weight coefficients may vary depending on the requirements of the technological process. A scoring sketch is given after the keywords.

    Keywords: programmable logic controller, efficiency evaluation method, weight coefficient, radar chart
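
    A hedged sketch of the scoring idea described above: each characteristic is linearly scaled to [0, 1] across the compared controllers and summed with weights (higher total = better fit). Controller names, characteristic values, and weights are invented for illustration.

```python
# Illustrative PLC data: characteristic -> value (all numbers assumed)
plcs = {
    "PLC-A": {"scan_time_ms": 2.0, "io_points": 256, "price": 900},
    "PLC-B": {"scan_time_ms": 5.0, "io_points": 512, "price": 600},
}
# (weight, higher_is_better) per characteristic; weights are assumptions
criteria = {"scan_time_ms": (0.6, False),
            "io_points":    (0.3, True),
            "price":        (0.1, False)}

def score(plcs, criteria):
    totals = {name: 0.0 for name in plcs}
    for crit, (weight, higher_better) in criteria.items():
        vals = [p[crit] for p in plcs.values()]
        lo, hi = min(vals), max(vals)
        for name, p in plcs.items():
            s = (p[crit] - lo) / (hi - lo) if hi > lo else 1.0  # linear scaling
            totals[name] += weight * (s if higher_better else 1.0 - s)
    return totals

print(score(plcs, criteria))  # higher total = better overall fit
```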

  • Comparison of MCTS, MCDDQ, MCDDQ-SA, and greedy algorithms for parallel machine-load scheduling in production

    This paper considers the problem of task scheduling in manufacturing systems with multiple machines operating in parallel. Four approaches to solving this problem are proposed: pure Monte Carlo Tree Search (MCTS), a hybrid MCDDQ agent combining reinforcement learning based on Double Deep Q-Network (DDQN) and Monte Carlo Tree Search (MCTS), an improved MCDDQ-SA agent integrating the Simulated Annealing (SA) algorithm to improve solution quality, and a greedy algorithm (Greedy). A model of the environment is developed that takes into account machine speeds and task durations. A comparative study of the effectiveness of the methods is conducted based on the makespan (maximum completion time) and idle time metrics. The results demonstrate that MCDDQ-SA provides the best balance between scheduling quality and computational efficiency thanks to adaptive exploration of the solution space. Analytical tools for evaluating the dynamics of the algorithms are presented, underscoring their applicability to real manufacturing systems. The paper offers new perspectives for the application of hybrid methods in resource management problems. A sketch of the greedy baseline is given after the keywords.

    Keywords: machine learning, Q-learning, deep neural networks, MCTS, DDQN, simulated annealing, scheduling, greedy algorithm
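
    A hedged sketch of the simplest of the four compared methods, the greedy baseline: each task is assigned to the machine that would finish it earliest given its speed, and the makespan is read off at the end. Task durations and machine speeds are invented for illustration.

```python
def greedy_schedule(durations, speeds):
    """Greedy: give each task to the machine that would finish it earliest.

    durations: base task durations; speeds: per-machine speed factors.
    Returns the makespan (maximum completion time over all machines).
    """
    finish = [0.0] * len(speeds)               # current finish time per machine
    for d in sorted(durations, reverse=True):  # longest-processing-time order
        i = min(range(len(speeds)), key=lambda m: finish[m] + d / speeds[m])
        finish[i] += d / speeds[i]
    return max(finish)                         # makespan

print(greedy_schedule([4, 3, 7, 2, 5], speeds=[1.0, 1.5]))
```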

  • Demand forecasting and inventory management using machine learning

    This article studies the capabilities of machine learning for forecasting the demand for goods. The study analyzes various models and their applicability to predicting future sales, with the greatest attention given to modern methods of time series analysis, in particular neural network and statistical approaches. The results clearly demonstrate the advantages and disadvantages of the different models and the degree to which their parameters influence forecast accuracy in the demand forecasting task. The practical significance of the findings lies in the possibility of applying the results to the analysis of similar data sets. The relevance of the study stems from the need for accurate demand forecasting to optimize inventory and reduce costs; modern machine learning methods increase prediction accuracy, which is especially important in an unstable market with changing consumer demand. A minimal statistical baseline sketch is given after the keywords.

    Keywords: machine learning algorithms, demand estimation, forecasting accuracy, time series analysis, sales volume prediction, Python, autoregressive integrated moving average, random forest, gradient boosting, neural networks, long short-term memory
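
    A hedged sketch of one statistical baseline named in the keywords (ARIMA via statsmodels). The synthetic series and the order (1, 1, 1) are assumptions, not the study's data or tuned parameters.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic weekly sales with trend and noise (placeholder for real data)
rng = np.random.default_rng(0)
sales = 100 + 0.5 * np.arange(104) + rng.normal(0, 5, 104)

model = ARIMA(sales, order=(1, 1, 1)).fit()  # order is an assumption
forecast = model.forecast(steps=4)           # next four periods
print(forecast)
```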

  • Content-based approach in recommender systems: principles, methods and performance metrics

    This paper explores the content-based filtering approach in modern recommender systems, focusing on its key principles, implementation methods, and evaluation metrics. The study highlights the advantages of content-based systems in scenarios that require deep object analysis and user preference modeling, especially when there is a lack of data for collaborative filtering. A minimal sketch is given after the keywords.

    Keywords: content-based filtering, recommendation systems, feature extraction, similarity metrics, personalization
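
    A hedged minimal sketch of content-based filtering: item descriptions are vectorized with TF-IDF, a user profile is averaged from liked items, and unseen items are ranked by cosine similarity to that profile. The toy catalog is invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy item descriptions (invented)
items = ["space opera adventure", "space documentary science",
         "romantic comedy", "science fiction thriller"]
liked = [0, 3]  # indices of items the user rated highly

vec = TfidfVectorizer()
X = vec.fit_transform(items)                    # TF-IDF item vectors
profile = np.asarray(X[liked].mean(axis=0))     # mean of liked items
scores = cosine_similarity(profile, X).ravel()  # similarity to profile

# Recommend unseen items, most similar first
ranking = [i for i in scores.argsort()[::-1] if i not in liked]
print([items[i] for i in ranking])
```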

  • Simulation of the behavior of wood using ANSYS to assess its strength in real structures

    The article presents the results of comparing numerical modeling of wooden structures with laboratory and full-scale tests. Numerical material models were created in the Ansys Workbench software package from solid (volumetric) finite elements with varied sets of physical and mechanical parameters simulating the behavior of real wood. The simulation parameters were based on laboratory test results for a solid wood beam, and the simulation results were compared with full-scale test results for a composite wood slab. The structures were modeled using linear, bilinear, and multilinear material models.

    Keywords: solid wood beam, composite wood slab, bilinear finite element model, multilinear finite element model, stress-strain state