
Intelligent Power Grid Video Surveillance Technology Based on Efficient Compression Algorithm Using Robust Particle Swarm Optimization

Published online by Cambridge University Press:  01 January 2024

Hongyang He*
Affiliation:
Chongqing Fuling Electric Power Industry Co., Ltd, Chongqing, China
Yue Gao
Affiliation:
Chongqing Fuling Electric Power Industry Co., Ltd, Chongqing, China
Yong Zheng
Affiliation:
Chongqing Fuling Electric Power Industry Co., Ltd, Chongqing, China
Yining Liu
Affiliation:
Chongqing Fuling Electric Power Industry Co., Ltd, Chongqing, China
*
Correspondence should be addressed to Hongyang He; 0000003@yzpc.edu.cn

Abstract

Companies that produce energy transmit it to households via the power grid, a regulated transmission hub that acts as an intermediary. When a power grid fails, the whole area it serves is blacked out, so a monitoring system is required to ensure smooth and effective operation. Computer vision is among the most widely used and actively researched applications in video surveillance. Although much has been accomplished in power grid surveillance, a more effective compression method is still needed so that large quantities of grid surveillance video can be archived compactly and transmitted efficiently. Video compression has become increasingly essential with the advent of contemporary video processing algorithms, and the efficacy of an algorithm in a power grid monitoring system depends on the rate at which video data can be transmitted. This study describes a novel compression technique for video inputs from power grid monitoring equipment. Traditional techniques cannot adequately reduce the redundancy in the visual input and therefore fail to meet current demands; as a result, the volume of data that must be stored and handled in real time grows. The proposed technique overcomes these problems by encoding frames and reducing duplication in surveillance video using texture-information similarity together with a Robust Particle Swarm Optimization (RPSO) based run-length coding approach. Experimental findings and assessments on different surveillance video sequences with varied parameters show that our solution surpasses relevant existing algorithms: a large collection of surveillance videos was compressed at a 50% higher rate with the suggested approach than with existing methods.

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © 2021 Hongyang He et al.

1. Introduction

As discussed by Memos et al. [Reference Memos, Psannis, Ishibashi, Kim and Gupta1], the number of switch-mode power supplies is increasing, as are incentive-based switching activities at the end-user level. Future smart grids will therefore require monitoring with high temporal resolution to properly examine the state of the electricity grid. Power grid measurement applications sample at kilohertz frequencies, but the degree of aggregation and the reporting rate differ: instead of reporting instantaneous data at second rates, as smart meters do, cumulative consumption data is reported at daily or longer intervals. This consolidation significantly reduces the demands on communication links and storage. Assessing power quality (PQ) and disaggregating loads, however, requires more data, as Gao et al. have shown [Reference Gao and Wang2]. Several features can be derived on top of harmonics, but they provide only partial information. Changes in grid operating strategies, demand-side control, and the rise of decentralized generation have led to an unknown number of combinations of disturbances. Feature-based approaches may be unreliable because information is destroyed, particularly for short-lived events. In commercially available PQ measurement equipment, with sample rates ranging from 10 kHz to 100 MHz, some thresholds may be adjusted by the user so that raw data is captured when an event occurs; in a future smart grid, however, suitable threshold values will be hard to predict. As Tsakanikas et al. [Reference Tsakanikas and Dagiuklas3] have pointed out, further insights may be gained by examining raw data from synchronized measurements at different locations, even if not all scattered sensors classified an event simultaneously and therefore did not capture it at high resolution. Deploying continuous storage for raw data will assist data-driven research that aims to improve event classification and smart grid analysis algorithms; with lossless data compression, for example, compressing and transmitting large volumes of data becomes considerably easier.

Typically, the raw data stream of a recording device contains three voltage readings (one per phase) and four current measurements (three phase currents and one for the neutral conductor). A three-phase power system consists of nominally sinusoidal voltage curves that are 120° out of phase, so the voltage channels are strongly correlated. The same applies to the current channels, and exploiting this correlation allows a certain reduction in data volume. Because of their individual distortion, the current waveforms are less correlated; phase-load distortions are the main cause of decorrelation in the current channels. It is important to note that waveforms only change when the equipment's contact state or the load changes, and such operations are slow compared to the recording duration. With only one connected load, waveform changes are rare; when a large number of loads are connected within the sensing range of the grid, variations occur rapidly. Waveform compression is therefore conceivable. Lossless compression methods are known to exist for certain applications, such as music and video.
However, no method has been identified that is specifically designed to exploit the periodicity and multichannel nature of electrical signals in a stream compression setting. Overviews of lossy and lossless techniques, including compression ratio (CR) values from trials, exist in the literature, and applications focused on PQ-event compression have been listed, for example by Shidik and colleagues [Reference Shidik, Noersasongko, Nugraha, Andono, Jumanto and Kusuma4]. There is, however, no statistical analysis of long original recordings: these models focus on very specific incident data to validate algorithmic changes in their own contexts, and in the majority of cases the data sources are not referenced or provided at all. Researchers therefore have no benchmark against which to assess the feasibility of compression algorithms for grid waveform data, regardless of whether they use known techniques or develop new ones. In the present contribution, we address these issues by focusing on the development and evaluation of lossless compression techniques for grid data sampled at high rates. Using input data with a variety of characteristics, we also consider new ideas drawn from classical time-series analysis. Testing data and comparison parameters form a first thorough, openly accessible baseline against which new lossless compression algorithms can be developed, and they can serve as a decision-support tool for researchers dealing with data-intensive smart grid measurements. In the proposed pipeline, the preprocessing phase changes the color space, after which features are extracted using pseudo-component analysis; encoding and decoding are then completed using Robust Particle Swarm Optimization. The main contributions of this work are as follows:

  1. (i) To design and develop compression-based video surveillance technology using the optimization approach

  2. (ii) To perform run-length encoding and decoding for authentication of the reconstructed video

The rest of the article is organized as follows. Section 2 reports a literature survey on strategies for reducing loss during video compression. Section 3 states the problem of lossless video compression. Section 4 presents the proposed lossless video compression mechanism. The results of the suggested method and the conclusions are presented in Sections 5 and 6, respectively.

2. Related Works

In [Reference Memos, Psannis, Ishibashi, Kim and Gupta1], the article examines wireless sensor networks (WSNs) alongside the most recent research on social confidentiality and protection in WSNs, and presents a novel EAMSuS scheme for IoT organizations that adopts High-Efficiency Video Coding (HEVC) as the media compression standard. In [Reference Hampapur, Brown and Connell5], complete situational awareness is provided via real-time video analysis and active cameras. In [Reference Duan, Liu, Yang, Huang and Gao6], the authors propose a new branch of MPEG standardization called Video Coding for Machines (VCM), which seeks to bridge the gap between machine-vision feature coding and human-vision video coding in order to achieve collaborative compression and intelligent analytics. VCM's definition, formulation, and paradigm are provided first, in line with the emerging compression needs of the Digital Retina; the authors then analyze video compression and feature coding from the perspective of MPEG standards, offering both academia and industry evidence toward collaborative compression of video in the near future. In [Reference Yoon, Jung, Park, Lee, Yun and Lee7], the authors developed UTOPIA smart video surveillance for smart cities using MapReduce, incorporating smart video surveillance into their middleware platform and showing that the system is scalable, efficient, dependable, and flexible. In [Reference Rajavel, Ravichandran, Harimoorthy, Nagappan and Gobichettipalayam8], a cloud object tracking and behavior identification system (CORBIS) with edge computing capabilities is demonstrated; to increase the resiliency and intelligence of distributed video surveillance systems, the network bandwidth and reaction time between wireless cameras and cloud servers in the Internet of Things (IoT) are reduced. In [Reference Hamza, Hassan, Huang, Ke and Yan9], an efficient cryptosystem is used to create a secure IoT-based surveillance system consisting of three parts: an automated summarization technique based on histogram clustering extracts keyframes from the surveillance footage, a discrete cosine transform (DCT) is applied to compress the data, and a discrete fractional random transform (DFRT) is used to develop an efficient picture encryption approach. In [Reference Prakash10], the author proposes a novel approach for compressing video inputs from surveillance systems; outdated methods cannot reduce visual input redundancy and do not meet the demands of modern technologies, which increases both the storage requirements for video input and the time required to process it in real time. In [Reference Nandhini and Radha11], the authors use compressed sensing (CS) to create security keys from the measurement matrix elements; these keys are designed so that attackers cannot reconstruct the video.
A WMSN testbed is used to analyze the effectiveness of the proposed security architecture in terms of memory footprint, security processing overhead, communication overhead, energy consumption, and packet loss. In [Reference Jiang, Wang, Daneshmand and Wu12], a new binary exponential backoff (NBEB) technique is suggested to "compress" unsent data in a way that preserves important information while recovering the underlying trend as far as feasible; incoming data is temporally selected and dumped into a buffer, fresh data is added to the buffer as it is received, and the algorithm reduces the incoming traffic rate in an exponential relationship with the number of transmission failures. In [Reference Jumar, Maaß and Hagenmeyer13], the authors suggest lossless compression to handle the problem of managing huge amounts of raw data with a quasiperiodic nature; the best compression method for this type of data is determined by comparing freely available algorithms and implementations in terms of compression ratio, computation time, and operating principles, covering algorithms for audio archiving as well as general-purpose data compression (the Lempel–Ziv–Markov chain algorithm (LZMA), Deflate, Prediction by Partial Matching (PPMd), the Burrows–Wheeler algorithm (Bzip2), and GNU zip (Gzip)). In [Reference Elhannachi, Benamrane and Abdelmalik14], an efficient embedded image coder based on a reversible discrete cosine transform (RDCT) is proposed for lossless region-of-interest (ROI) coding with a high compression ratio; to further compress the background, a hierarchical set-partitioning (SPIHT) technique combines the proposed rearranged structure with lossy zero-tree wavelet coding, and the coding results indicate that the new encoder outperforms many state-of-the-art methods for still image compression. The work in [Reference Sophia and Anitha15Reference Yamnenko and Levchenko17] focuses on lossy video compression: even at lower bit rates, the novel lossy compression method improves contourlet compression performance. Along with SVD, compression efficiency is improved by standardization and prediction of broken subband coefficients (BSCs) [Reference Rahimunnisha and Sudhavani18]; the computational complexity of the solution is measured while delivering better video quality, and the HCD scheme uses DWT, DCT, and genetic optimization to improve the performance of the transformed coefficients, working well with MVC to obtain the best possible rate distortion. Simulation results are produced in MATLAB Simulink R2015 to examine PSNR, bit rate, and computation time for various video sequences using various wavelet functions, and the performance is evaluated [Reference Xu, Liu, Yan, Liao and Zhang19]. To solve the optimization problem of trajectory combination while producing video synopses, a new approach has been devised.
When dealing with the optimization problem of motion trajectory combination, the technique makes use of the genetic algorithm's (GA) temporal combination methods [Reference Darwish and Almajtomi20]. In that work, the evolutionary algorithm is used as an activation function within the hidden layer of a neural network to construct an optimal codebook for adaptive vector quantization, proposed as a modified video compression model; the context-based initial codebook is generated using a background-removal technique that extracts moving objects from frames. Furthermore, lossless compression of important wavelet coefficients is achieved using Differential Pulse Code Modulation (DPCM), whereas lossy compression of low-energy coefficients is achieved using Learning Vector Quantization (LVQ) neural networks. In [Reference Abduljabbar, Hamid and Alhyani21], a rapid text encryption method based on a genetic algorithm is presented: the genetic Crossover and Mutation operators are used for encryption, splitting the plain-text characters into pairs and applying a crossover operation to obtain the encrypted text, after which mutation produces the final encrypted message.

From the literature survey, images and videos are most commonly compressed using transform-based and fractal approaches, together with other lossless encoding algorithms, which remain the most frequently used methods for still-image and video compression. Each technique has its own advantages and drawbacks, such as breakup of the wavelet signal or a low compression ratio; hence, it is important to choose the right one. Video frames are most commonly compressed using transform-based compression (TBC), in which the signal or sample values are transformed to achieve compression: a spatial-domain representation of the picture is converted using various transforms. Brushlets (Verdoja and Grangetto 2017) are an example of adaptive transforms, while bandelets (Raja 2018; Erwan et al. 2005) and directionlets (Jing et al. 2021) require prior information about the picture. Applying these transforms to a picture alters its essential characteristics. Hence, we are motivated to develop a methodology that overcomes the existing video compression issues.

3. Problem Statement

Compression technology is advancing rapidly. Real-time video compression is a challenging and essential topic that has attracted a great deal of research, and much of this body of knowledge has been incorporated into the motion video standards. Nevertheless, several significant questions remain unanswered. From the point of view of a compression algorithm, eliminating the various redundancies present in certain types of video data is the central challenge. A thorough understanding of the problem is needed, as well as a novel approach that addresses the existing research gaps associated with irreversible video compression. Progress in other fields, such as artificial intelligence, has contributed to breakthroughs in compression. A compression algorithm's success depends not only on its technological excellence but also on the acceptance of a new generation of algorithms.

4. Proposed Work

As a result of the smart grid's usage of Information and Communication Technologies (ICTs), the generation, distribution, and consumption of electricity are all more efficient. For example, the transmission system and the medium-voltage distribution system are monitored by Supervisory Control and Data Acquisition (SCADA) and wide-area monitoring systems (WAMS). It is important to remember that the primary objective of compression is to minimize the amount of data, provided the compressed data retains most of its original content. Various scholars are currently proposing effective data compression techniques, and some of the most prevalent ones are discussed below. In this analysis, we focus on compressing the PQ-event data in a video context in each successive frame to save space. To accomplish this, we must first identify the object in each video frame. Robust Particle Swarm Optimization is used to create a lossless video compression method. The recommended technique is shown schematically in Figure 1.

FIGURE 1: Schematic representation of the suggested methodology.

4.1. Dataset

The experiments were conducted on the UK Domestic Appliance-Level Electricity (UK-DALE) dataset. A smart distribution system collects data on three-phase voltage, current, active and reactive power, and power factor from transformers at 54 substations, as well as estimates of current and voltage at the inlets of three homes; these data are then analyzed and compared with the raw data from the three homes. A 16 kHz sampling rate and a 24-bit vertical resolution were employed in the acquisition. Six FLAC-compressed recordings, each containing one hour of data, were selected at random from the period 2014-08-08 to 2014-05-15. In a proprietary format, these data are recorded as four-byte floating-point numbers with timestamps at a sampling rate of 15 kHz. Voltage and current values are included for phase 2 of house 5, and each of the four files contains 266 s of data. All data transferred via the network is held in large-scale databases. Raw data for the three-phase voltages requires 8.4 GB per day, whereas the three-phase currents (including neutral) require 19.35 GB per day; transmitting these data requires 0.8 Mbit/s and 1.8 Mbit/s, respectively. This dataset was compiled at the following locations: the main power supply of our institution in Karlsruhe, Germany, power outlets in our practical room, and a substation transformer there. A total of seven channels, consisting of four currents and three voltages, are sampled at 12.8 and 25 kS/s, respectively. Single-channel and dual-channel tests measure the current and/or voltage of a single phase, depending on which method is used. The data is stored as raw 16-bit integers in blocks of 60 s.
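To make the storage and bandwidth figures above concrete, the short sketch below estimates the daily raw-data volume from a channel count, sampling rate, and sample width, and converts a link bit rate into gigabytes per day. The channel counts and rates are taken from the text; the helper names and exact sample widths are illustrative assumptions, not part of the original measurement setup.

```python
# Back-of-the-envelope estimate of raw grid-measurement data volume.
# Assumed parameters (channel count, sample rate, sample width) are
# illustrative; the dataset described in the text may differ in detail.

SECONDS_PER_DAY = 86_400

def stream_bitrate(channels: int, sample_rate_hz: float, bits_per_sample: int) -> float:
    """Raw bit rate of a multichannel measurement stream in bit/s."""
    return channels * sample_rate_hz * bits_per_sample

def gigabytes_per_day(bitrate_bps: float) -> float:
    """Daily storage requirement in GB for a continuous stream."""
    return bitrate_bps * SECONDS_PER_DAY / 8 / 1e9

if __name__ == "__main__":
    # Three voltage channels, 16-bit samples at 16 kHz (assumed widths).
    v_rate = stream_bitrate(channels=3, sample_rate_hz=16_000, bits_per_sample=16)
    print(f"voltages: {v_rate/1e6:.2f} Mbit/s, {gigabytes_per_day(v_rate):.1f} GB/day")

    # Converting the link rates quoted in the text directly to GB/day.
    for mbit in (0.8, 1.8):
        print(f"{mbit} Mbit/s  ->  {gigabytes_per_day(mbit*1e6):.1f} GB/day")
```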

Electricity generation, transmission, and distribution in smart power systems are all affected by the analysis of these data. As a result, data exchange and memory requirements are expected to grow considerably, as are the data storage and bandwidth requirements of the communication links in smart grids. The sampling frequency must be raised to obtain reliable and real-time information from the intelligent grid, so smart grid data compression will receive greater emphasis in the future. Figure 1 illustrates the proposed compression approach, which can be applied successfully in areas of the grid with significant data volume.

4.1.1. Preprocessing

Video compression comprises several steps, the first being preprocessing. Preprocessing is essential for a database's longevity and usefulness, which is why every stage of the video data processing workflow matters. The procedure involves preprocessing steps such as error detection and other conversions that are not strictly essential. The power grid video is first split into picture frames; the Bayesian motion subsampling approach may be used to create the video frames, a common method for extracting frames from a video. It is a computerized method used to enhance the frame creation process: for the most common sensitivities, the intensity range of the picture frame is expanded, which improves the sensitivity of the image frame.

Let p(y) denote the subsampled probability of each possible frame intensity, given by

(1) $p(y) = \dfrac{\text{number of pixel frames with intensity } y}{\text{total number of pixel frames}}$.

Here, $y = 0, 1, \ldots, Y - 1$.

The separated pixel frames can be defined as depicted in [Reference Azam, Ur Rahman and Irfan22]:

(2) $H_{i,j} = \operatorname{base}\!\left(\displaystyle\sum_{Y=0}^{Y-1} b_{i,j}\, p(Y)\right)$,

where base(·) denotes rounding to the nearest integer. This is equivalent to transforming the pixel intensity [Reference Xiang, Tang, Zhao and Su23]:

(3) $\displaystyle\int_{x_0}^{x} p_N(z)\, dz = \frac{N(x) - N(x_0)}{N}$.

Here, the resulting uniformly distributed probability function can be represented as $N/x$.

Histogram equalization can smooth and enhance the histogram; however, even though the histogram produced by equalization is ideally flat, in practice it is only smoothed. After reducing the superfluous noise in the pictures, we apply a thresholding technique to improve the refined frame obtained from the context; binary images are then created, which streamlines further image processing. As a result of the color space conversion, a shading effect is seen in the majority of pictures. The picture contains three channels in most cases (red, green, and blue). The blue channel carries no additional information but has a great deal of contrast, so the green channel is retained and preprocessed next. The green channel is extracted as follows [Reference Kwon, Kim and Park24]:

(4) $I_{\text{org}} = f(\sigma, \mu, \beta), \quad I_{\text{red}} = f(1, \mu, \beta), \quad I_{\text{green}} = f(\sigma, 2, \beta)$,

where σ denotes the Red channel, µ denotes the Green channel, and β denotes the Blue channel.

Translation of color representation from one basis to another is called color space conversion (CSC). In most cases, this occurs while converting a picture from one color space to another. The use of a single threshold value for converting the color space is thus not recommended.

(5) $\theta_{\text{Threshold}} \le \pounds_{juv}^{1/3}\left(j_{\text{best}} - j_{i}\right)$,

where $\pounds$ represents the converted color space.

The color space is transformed to grayscale while keeping the brightness information. A grayscale picture frame can then be represented as a collection of grayscale images $D_2$:

(6) $\pounds(D_2) = \mathrm{GS}(D_1) = \left\{ d_2^{(1)}, d_2^{(2)}, \ldots, d_2^{(i)}, \ldots, d_2^{(D)} \right\}$.

After the frames are preprocessed, the data can undergo feature extraction.
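For concreteness, the following sketch implements the preprocessing chain described above on a single frame: green-channel selection, grayscale histogram equalization following equations (1) and (2), and a final threshold that yields a binary image. It is a minimal NumPy illustration under the assumption that frames are already available as 8-bit RGB arrays; the function names and the fixed threshold are illustrative, not part of the original specification.

```python
import numpy as np

def equalize_histogram(gray: np.ndarray, levels: int = 256) -> np.ndarray:
    """Histogram equalization: map each intensity through the scaled CDF
    of p(y), the fraction of pixels with intensity y (equations (1)-(2))."""
    hist = np.bincount(gray.ravel(), minlength=levels)
    p = hist / gray.size                      # p(y), equation (1)
    cdf = np.cumsum(p)                        # cumulative distribution
    mapping = np.round((levels - 1) * cdf)    # base(...) rounding, equation (2)
    return mapping.astype(np.uint8)[gray]

def preprocess_frame(frame_rgb: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Green-channel selection, grayscale equalization, and binarization."""
    green = frame_rgb[..., 1]                 # keep the green channel
    equalized = equalize_histogram(green)
    return (equalized >= threshold).astype(np.uint8)  # binary frame

if __name__ == "__main__":
    frame = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)  # stand-in frame
    binary = preprocess_frame(frame)
    print(binary.shape, binary.min(), binary.max())
```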

4.1.2. Feature Extraction

We implemented the pseudo-component analysis in the feature extraction module to improve the compression performance and concentrate the image’s information. The method for decreasing the size and complexity of data sets involves converting huge numbers of variables into smaller ones that retain the majority of the information contained in the large set. Naturally, limiting the number of parameters in sets of data lowers the information’s accuracy, but the trick is to give up just a little precision for convenience. It is simpler to examine and interpret smaller data sets. Machine learning algorithms can also examine data more easily and quickly without dealing with extraneous issues. Each pseudo-redundancy component must be selected as a first stage in the process of feature extraction. In this module, the main goal is to extract the highlighted characteristics. Below are the configurations of this mechanism.

(7) $y_{\text{input}} = \left[\, V_c^{T} y_c + B_c,\; V_s^{T} y_s + B_s \,\right], \qquad \beta = f_2\!\left( V_{\text{int}}^{T}\, f_1\!\left( y_{\text{input}} \right) + B_{\text{int}} \right)$.

Here, $[\,V_c^{T} y_c + B_c,\; V_s^{T} y_s + B_s\,]$ denotes the overall feature level; $V_c^{T} \in \mathbb{R}^{C_c \times C_{\text{int}}}$, $V_s^{T} \in \mathbb{R}^{C_s \times C_{\text{int}}}$, and $V_{\text{int}}^{T} \in \mathbb{R}^{2 C_{\text{int}} \times 1}$ represent the feature weights; $B_c$, $B_s$, and $B_{\text{int}}$ are the associated biases; $C_c$ and $C_s$ correspond to the input sizes of the categorization and feature sections, respectively; and $C_{\text{int}}$ denotes the internal input size. The operations $f_1(y) = \max(y, 0)$ and $f_2(y) = 1/(1 + \exp(-y))$ correspond to the rectifier and sigmoid activations, respectively. The attention map $\beta$ is further normalized to [0, 1]. The outcome of the feature extraction is represented as depicted in [Reference Tuncer, Dogan, Ertam and Subasi25]:

(8) $y_{\text{out}} = f_3\!\left( \beta \times \left[\, y_c, y_s \,\right] \right)$.

Here, f 3 consists of a sequence of the feature components.

Pseudo- and nonpseudo-component characteristics can be selected using a property calculation technique. To determine the pseudo-component characteristics, Hong correlations, which employ averaging techniques, and Leibovici correlations, which use mixing principles, are used. In this approach, the phase fraction values are collected from a compositional system so as to minimize the difference between them. Pseudo- and nonpseudo-redundancy characteristics can then be retrieved as follows [Reference Xie, Ren, Long, Yang and Tang26]:

(9) $L_o(\text{pseudo}, \text{nonpseudo}) = 1 - \dfrac{2\,|A \cap B|}{|A| + |B|} = 1 - \dfrac{\sum_{j}^{N} p_j g_j + s_m}{\sum_{j}^{N} p_j + \sum_{j}^{N} g_j + s_m}$,

where pj represents the pseudo features, gj represents the nonpseudo features, and sm represents the empirical constant.
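The attention-style weighting in equations (7)-(8) and the overlap score in equation (9) can be sketched as below. This is a minimal NumPy illustration under assumed array shapes; the weight matrices are randomly initialized stand-ins and the function names are not taken from the original paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(y):      # f1 in equation (7)
    return np.maximum(y, 0.0)

def sigmoid(y):   # f2 in equation (7)
    return 1.0 / (1.0 + np.exp(-y))

def attention_features(y_c, y_s, C_int=8):
    """Equations (7)-(8): weight the concatenated category/feature inputs
    by an attention coefficient beta in [0, 1]."""
    C_c, C_s = y_c.shape[0], y_s.shape[0]
    V_c = rng.standard_normal((C_c, C_int));  B_c = np.zeros(C_int)
    V_s = rng.standard_normal((C_s, C_int));  B_s = np.zeros(C_int)
    V_int = rng.standard_normal((2 * C_int, 1));  B_int = 0.0

    y_input = np.concatenate([V_c.T @ y_c + B_c, V_s.T @ y_s + B_s])  # eq. (7)
    beta = sigmoid(V_int.T @ relu(y_input) + B_int)                   # eq. (7)
    return beta * np.concatenate([y_c, y_s])                          # eq. (8)

def pseudo_loss(p, g, s_m=1e-6):
    """Equation (9): Dice-style overlap between pseudo (p) and
    nonpseudo (g) feature indicators."""
    return 1.0 - (np.sum(p * g) + s_m) / (np.sum(p) + np.sum(g) + s_m)

if __name__ == "__main__":
    y_c, y_s = rng.random(16), rng.random(16)
    print(attention_features(y_c, y_s).shape)       # (32,)
    print(pseudo_loss(rng.random(32), rng.random(32)))
```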

4.1.3. Optimized Compression Process

Video data may be compressed without losing any information using this method. For the compression stage, the common run-length coding (RLC) is optimized using Robust Particle Swarm Optimization (RPSO). We analyze the properties of the compressed data using this technique; as a population-based approach, it is well suited to maximizing the compression-related parameters. RPSO is initialized with sample particles and updated with the best answer found in each cycle. The value of the objective is called the fitness, and the best solution obtained so far by a particle is referred to as its personal best (pbest). The best solution obtained by any particle in the population is the global best (gbest), monitored by the particle swarm optimizer. Guided by the pbest and gbest solutions, the position of each particle moves toward the global optimum. The individual velocities and positions of the particles are defined as follows. In a D-dimensional search space, the swarm is composed of particles where each particle i is represented by a vector $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, and its personal best solution pbest is denoted $p_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$. The best solution of the swarm, gbest, is $p_g = (p_{g1}, p_{g2}, \ldots, p_{gD})$. The velocity of the $i$th particle is represented as $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. The particle velocity and position are updated according to equations (10) and (11).

The velocity update is given by [Reference Liu, Wang, Zeng, Yuan, Alsaadi and Liu27]

(10) $v_{id}^{\,n+1} = W_{it}\, v_{id}^{\,n} + C_1\,\mathrm{rand}\,\bigl(p_{id} - x_{id}^{\,n}\bigr) + C_2\,\mathrm{rand}\,\bigl(p_{gd} - x_{id}^{\,n}\bigr)$,

where $W$ represents the inertia weight applied to the weighted features, $C_1$ and $C_2$ are acceleration constants, $n$ is the iteration index, rand is a random number, $x_{id}$ is the particle position, and $p_{id}$ is the personal best position. Depending on the extracted features, the details are updated according to this weighting, with $|v_{id}| \le V_{\max}$; the number of iterations ranges from 1 to 10, where 10 is the maximum number of iterations. The random value rand lies between 0 and 1. $C_1$ and $C_2$ are normally nonnegative acceleration constants; here, $C_1 = C_2 = 1.05$. The particle position is then updated with [Reference Yong, Li-Juan, Qian and Xiao-Yan28]

(11) $x_{id}^{\,n+1} = x_{id}^{\,n} + v_{id}^{\,n+1}$.

Each particle in the swarm evaluates the fitness (objective) $f$, and each iteration updates the best solutions: if $f(x_i) < f(p_{\text{best}})$, then $p_{\text{best}} = x_i$, and if $f(x_i) < f(g_{\text{best}})$, then $g_{\text{best}} = x_i$. The optimal measurement is obtained to maximize the curvelet transform coefficients.
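A minimal sketch of the particle swarm update loop in equations (10)-(11) is given below. It uses the acceleration constants C1 = C2 = 1.05 and the small iteration budget mentioned in the text; the fitness function, swarm size, and velocity clamp are illustrative placeholders rather than the exact RPSO configuration of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(fitness, dim, n_particles=20, n_iters=10,
                 w=0.7, c1=1.05, c2=1.05, v_max=1.0, bounds=(-5.0, 5.0)):
    """Basic particle swarm optimization following equations (10)-(11)."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))       # positions
    v = np.zeros_like(x)                                    # velocities
    p_best = x.copy()                                       # personal bests
    p_val = np.array([fitness(p) for p in p_best])
    g_best = p_best[np.argmin(p_val)].copy()                # global best

    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Equation (10): velocity update with inertia and two attraction terms.
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        v = np.clip(v, -v_max, v_max)                       # |v| <= V_max
        x = x + v                                           # equation (11)

        vals = np.array([fitness(p) for p in x])
        improved = vals < p_val                             # f(x_i) < f(p_best)
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        g_best = p_best[np.argmin(p_val)].copy()            # f(x_i) < f(g_best)
    return g_best, p_val.min()

if __name__ == "__main__":
    sphere = lambda z: float(np.sum(z ** 2))                # toy fitness
    best, val = pso_minimize(sphere, dim=4)
    print(best, val)
```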

Once the optimized values are acquired, the video can be reconstructed via run-length encoding, which is a fairly simple operation to perform on sequential data and a useful tool for redundant data. In this technique, runs of symbols are replaced by shorter codes. In grayscale images, the run-length code is expressed by two values, V and R, where V represents the symbol value and R represents the run length. Optimal run-length encoding (ORLE) requires the following steps (a code sketch follows the list):

  • Step 1: Optimize the coefficients

  • Step 2: Read the input string

  • Step 3: Assign a unique value starting from the very first symbol or letter

  • Step 4: Stop if the character or symbol is the final one in the string

  • Step 5: Otherwise, read and count the additional symbols

  • Step 6: Repeat from Step 3 until the preceding symbol subband has a nonmatching value

  • Step 7: This yields a count of the number of times a certain symbol appears in the given sequence
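As referenced above, the run-length encoding pass itself can be sketched as follows: consecutive repeated symbols are collapsed into (value, run-length) pairs. This is a minimal illustration of standard run-length coding, not the paper's full ORLE procedure, which additionally optimizes the coefficients with RPSO before encoding.

```python
from typing import Iterable, List, Tuple

def rle_encode(symbols: Iterable[int]) -> List[Tuple[int, int]]:
    """Collapse runs of identical symbols into (value V, run length R) pairs."""
    encoded: List[Tuple[int, int]] = []
    symbols = iter(symbols)
    try:
        prev = next(symbols)
    except StopIteration:
        return encoded                      # empty input
    count = 1
    for s in symbols:
        if s == prev:
            count += 1                      # extend the current run
        else:
            encoded.append((prev, count))   # close the run, start a new one
            prev, count = s, 1
    encoded.append((prev, count))
    return encoded

if __name__ == "__main__":
    row = [0, 0, 0, 255, 255, 0, 0, 0, 0]   # one row of a binary frame
    print(rle_encode(row))                  # [(0, 3), (255, 2), (0, 4)]
```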

The suggested methodology uses a vector containing a range of scales to transform the subbands, optimized between their minimum and maximum values, to achieve the best result.

(12) $\text{Compressed fitness value} = 40\, q^{3} S_v^{2} \exp\!\bigl(\cos\bigl(3\pi S_v / d_b\bigr)\bigr) + 10 \exp(1)$,

where $q$ denotes the compressed reconstructed value and $S_v$ is the compressed score value that is obtained. Finally, the best compression rate can be obtained. After refining the transform (curvelet) parameters, RPSO reconstructs the data using run-length decoding, as summarized in Algorithm 1.

Algorithm 1: (RPSO)
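The run-length decoding step referenced above simply expands each (value, run-length) pair back into a run of symbols. The sketch below pairs with the encoder shown earlier and round-trips losslessly; it is again a generic illustration rather than the exact decoder used in Algorithm 1.

```python
from typing import List, Tuple

def rle_decode(pairs: List[Tuple[int, int]]) -> List[int]:
    """Expand (value V, run length R) pairs back into the original symbol stream."""
    out: List[int] = []
    for value, run in pairs:
        out.extend([value] * run)           # reproduce the run exactly
    return out

if __name__ == "__main__":
    pairs = [(0, 3), (255, 2), (0, 4)]
    assert rle_decode(pairs) == [0, 0, 0, 255, 255, 0, 0, 0, 0]  # lossless round trip
    print(rle_decode(pairs))
```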

Finally, after compression, the status of the grid can be determined, the grid can be monitored, and irregular grid distribution can be identified.

5. Performance Analysis

Data is increasingly being exchanged across smart grid sectors, and many types of data are created every day; for example, meteorological data such as the amount of sun or wind, humidity, or temperature are essential for optimal performance in many industries. The data interchange procedure has two phases: encoding and decoding (or decryption). Numerous operations take place during the encoding phase to prepare data for transmission; when the data is encoded and then decoded, it is returned to its original form. This section describes the complete experimental procedure for the performance evaluation, which is implemented in MATLAB. Measurement data was collected over 24 hours in 1-minute, 5-second, 10-second, and 20-second intervals to assess the proposed compression methods, and readings from multiple meters were collected into a data matrix for each period.

Table 1 illustrates the effect of truncating small singular values on the compression ratio (CR) and the percentage residual root difference. The minimum root-mean-square distance is obtained when eight singular values are retained, which leads to a reduction in the signal length. Compared to the other datasets, the CR values calculated for the 5-second interval data are closer to the total compression ratio (TCR) values in Table 1, while the data obtained at 1-minute, 10-second, and 20-second intervals produce CRs that deviate somewhat from the TCRs. Figure 2 illustrates the relationship between the number of significant singular values and the TCR. According to the plotted data, the size of the data matrix affects the compression ratio and the number of significant singular values, r; the data matrices for the 5-second and 1-minute intervals have different sizes. A greater number of significant singular values was required to match the TCR in the 5-, 10-, and 20-second datasets than in the 1-minute dataset, as can be observed in Figure 2. Conversely, selecting a shorter time interval, such as five seconds, gives a better approximation of the number of significant singular values, so the computed CRs are closer to the TCRs.
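The truncated-singular-value compression evaluated in Table 1 and Figure 2 can be sketched as below: a measurement data matrix is factorized with the SVD, only the r largest singular values are kept, and the compression ratio and reconstruction error are reported. The matrix shape and the way the CR counts stored elements are assumptions made for illustration; the paper's exact TCR definition may differ.

```python
import numpy as np

def svd_compress(data: np.ndarray, r: int):
    """Keep the r largest singular values of a measurement matrix and
    report the compression ratio and mean absolute reconstruction error."""
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    approx = u[:, :r] @ np.diag(s[:r]) @ vt[:r, :]
    stored = r * (u.shape[0] + vt.shape[1] + 1)   # elements of U_r, s_r, V_r
    cr = data.size / stored                       # compression ratio
    mae = np.mean(np.abs(data - approx))
    return approx, cr, mae

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Stand-in for 24 h of 1-minute readings from 20 meters (1440 x 20).
    readings = rng.random((1440, 20)).cumsum(axis=0)
    for r in (2, 4, 8):
        _, cr, mae = svd_compress(readings, r)
        print(f"r={r}: CR={cr:.2f}, MAE={mae:.4f}")
```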

TABLE 1: Computed CR and number of singular values, r, for compression.

FIGURE 2: Plot of the number of singular values versus TCR for the different datasets.

The mean error is a colloquial phrase that refers to the average of all the errors in a collection. In this context, an "error" refers to a measurement uncertainty or the difference between the measured value and the correct/true value; the more formal term is measurement error, often known as observational error. Figure 3 shows the relationship between the mean error and the TCRs for the different time-interval datasets. As shown in Figure 3, the data consisting of measurements at 1-minute intervals has the lowest mean error, and the MAE found for larger matrix sizes is greater when the TCRs are higher.

FIGURE 3: Plot of MAE versus TCR for different sampling rates.

Figure 4 shows the relationship between the number of significant singular values and the error rate. For the first 100 singular values, the 5-second dataset has the greatest MAE, followed by the 10-second and 20-second datasets, while the 1-minute interval dataset has the lowest MAE. Beyond the first 100 singular values, there is practically no error in any dataset. A dataset's size has a substantial impact on the singular values and the accuracy of the reconstructed data.

FIGURE 4: Plot of MAE versus the number of singular values, r for the dataset.

In this part, data from a smart distribution system is compressed to see how well the approach works. To sum up, the experimental findings show that more singular values are required to fulfill the TCR as a dataset grows in size. Increasing the number of singular values, however, reduces the degree to which the data can be compressed: fewer errors occur when the data is rebuilt after compression, but a greater amount of data must be transferred over a wider range of communication channels. Compressing information with a high number of singular values to fulfill the TCR therefore means sending more data, so the TCR must be matched to the quantity of data to be compressed in order to make the best use of the connection bandwidth when transferring the compressed data. The data reconstruction error between the reconstructed data g(i, j, s) and the original data F(i, j, s) can be calculated using

(13) $P_s = \dfrac{1}{3MN} \displaystyle\sum_{i=0}^{a-1} \sum_{j=0}^{b-1} \sum_{s=0}^{2} \bigl[ g(i, j, s) - F(i, j, s) \bigr]$.

In addition, the mean average error (MAE), calculated here by averaging the squared errors, is another way to assess reconstruction accuracy.

The MAE is defined as [Reference Willmott and Matsuura29]

(14) $\mathrm{MAE} = \displaystyle\sum_{i=0}^{a-1} \sum_{j=0}^{b-1} \sum_{s=0}^{2} \bigl[ g(i, j, s) - f(i, j, s) \bigr]^{2}$.

A measure of the quality of compression and reconstruction is the signal-to-noise ratio (SNR). The peak SNR can be defined as [Reference Huynh-Thu and Ghanbari30]:

(15) $\mathrm{PSNR}_{\mathrm{dB}} = 10 \log_{10}\!\left( \dfrac{\mathrm{Max}_i}{\mathrm{MAE}} \right)$,

where $\mathrm{Max}_i$ is the maximum possible pixel value.

MD quantifies the greatest difference between the original and reconstructed values, while SSIM denotes the structural similarity between them. They are computed as follows [Reference Moorthy and Bovik31]:

(16) $\mathrm{SSIM} = \displaystyle\sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \sum_{s=0}^{S-1} I(i, j, s), \qquad \mathrm{MD} = \max_{0 \le i \le M} I(i, j, s)$.
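The reconstruction-quality metrics in equations (13)-(15) can be computed directly on a pair of original and reconstructed frames, as in the short sketch below. The error definitions follow the equations as written in this section (note that the MAE here averages squared differences and PSNR uses Max_i/MAE), with a small epsilon added to avoid division by zero; this is an illustrative implementation, not a reference one.

```python
import numpy as np

def reconstruction_metrics(original: np.ndarray, reconstructed: np.ndarray):
    """Per-frame error metrics following equations (13)-(15) of the text."""
    diff = reconstructed.astype(np.float64) - original.astype(np.float64)
    a, b = original.shape[:2]
    p_s = diff.sum() / (3 * a * b)                      # equation (13)
    mae = np.mean(diff ** 2)                            # equation (14), squared-error form
    max_i = float(original.max())
    psnr_db = 10 * np.log10(max_i / (mae + 1e-12))      # equation (15)
    md = float(np.abs(diff).max())                      # maximum difference
    return p_s, mae, psnr_db, md

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    orig = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    recon = np.clip(orig + rng.integers(-2, 3, size=orig.shape), 0, 255).astype(np.uint8)
    print(reconstruction_metrics(orig, recon))
```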

The video reconstruction error (MSE), signal-to-noise ratio (PSNR), matching distance (MD), and percent compression ratio (PCR) values obtained are depicted in Figure 5, and satisfactory compression results are shown in Table 2. From Table 2 and Figure 5, the suggested methodology shows the highest performance in terms of PSNR, MSE, and MD. As illustrated by the PSNR contours for the testing set in Figure 5, the PSNR improves as the compressed image bit rate increases. The results demonstrate a rising trend in PSNR values, whereas MSE drops progressively as the compressed image bit rate improves; a higher compressed image bit rate therefore means higher-resolution images and fewer errors.

FIGURE 5: Image Quality metrics.

TABLE 2: Average Data Quality metrics.

The existing mechanisms achieve a high compression ratio but require more time for compression; this drawback is overcome by the proposed mechanism.

5.1. Complexity Analysis

In general, the total number of states is approximately 2^N when computing the nth RLE number, F(N). Each state corresponds to a function call to 'RPSO with RLE()', which does nothing but make further recursive calls. Therefore, the total time taken to compute the nth number of the sequence is O(2^N).
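The exponential growth claimed above can be illustrated with a toy recursion: if each state branches into two further recursive calls (as in a naive Fibonacci-style recurrence), the call count grows roughly as 2^N. The snippet below simply counts calls for such a recurrence and contrasts it with a memoized version; it is an illustration of the O(2^N) argument, not the actual 'RPSO with RLE()' routine.

```python
from functools import lru_cache

calls = 0

def naive(n: int) -> int:
    """Naive two-branch recursion: the call tree has O(2^n) nodes."""
    global calls
    calls += 1
    if n < 2:
        return n
    return naive(n - 1) + naive(n - 2)

@lru_cache(maxsize=None)
def memoized(n: int) -> int:
    """Same recurrence with memoization: only O(n) distinct states."""
    if n < 2:
        return n
    return memoized(n - 1) + memoized(n - 2)

if __name__ == "__main__":
    for n in (10, 20, 25):
        calls = 0
        naive(n)
        print(f"n={n}: naive calls={calls}, memoized value={memoized(n)}")
```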

In digital file compression, duplication is the most important issue. If N1 and N2 signify the number of data-holding units in the raw and encoded images, respectively, the compression ratio can be specified as CR = N1/N2 and the data redundancy of the original image as RD = 1 − (1/CR). From Table 3 and Figure 6, the proposed methodology achieves the exact compression ratio (10 : 1) when compared to the Haar [Reference Yamnenko and Levchenko17] (10 : 16.5) and Cosine [Reference Yamnenko and Levchenko17] (10 : 17.2) techniques.
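These two quantities are straightforward to compute once the sizes of the raw and encoded representations are known; a small helper, with illustrative byte counts, is shown below.

```python
def compression_stats(n_raw: int, n_encoded: int):
    """Compression ratio CR = N1/N2 and redundancy RD = 1 - 1/CR."""
    cr = n_raw / n_encoded
    rd = 1.0 - 1.0 / cr
    return cr, rd

if __name__ == "__main__":
    # Illustrative sizes: a 10:1 ratio as reported for the proposed method.
    cr, rd = compression_stats(n_raw=1_000_000, n_encoded=100_000)
    print(f"CR = {cr:.1f}:1, RD = {rd:.2%}")
```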

TABLE 3: Average compression ratio.

FIGURE 6: Compression ratio.

6. Conclusion

Data compression techniques such as RPSO-based compression were examined and evaluated in this article. Data from a smart distribution system was used to evaluate the algorithm with 1-minute, 10-second, 20-second, and 5-second interval datasets. The results demonstrate that the amount of data has a considerable influence on the proposed approach: larger datasets require more significant singular values to achieve low error rates. When applied to the smart grid, RPSO can serve as a simple and uncomplicated compression method, and the significant singular values provide a decent approximation when the compressed data has to be rebuilt using the recommended approach. Depending on the number of singular values used, RPSO compression can lower the volume of data; for large amounts of data, the proposed compression technique is attractive because of its faster execution time and low error rates. Alongside these advantages, some disadvantages remain: in the proposed work the byte order is treated as independent, compression requires the data to be recompiled, errors may occur while transmitting the data, and the previously compressed data must be decompressed first. These disadvantages can be addressed in future work.

Data Availability

The data used to verify the study’s findings can be obtained from the author on request.

Conflicts of Interest

The authors state that the publishing of this paper does not include any conflicts of interest.

References

1. Memos, V. A., Psannis, K. E., Ishibashi, Y., Kim, B.-G., and Gupta, B. B., "An efficient algorithm for media-based surveillance system (EAMSuS) in IoT smart city framework," Future Generation Computer Systems, vol. 83, pp. 619–628, 2018.
2. Gao, Z. J. and Wang, J. S., "Application of smart grid technology in the coalmine power system," Applied Mechanics and Materials, vol. 441, pp. 236–239, 2014.
3. Tsakanikas, V. and Dagiuklas, T., "Video surveillance systems: current status and future trends," Computers & Electrical Engineering, vol. 70, pp. 736–753, 2018.
4. Shidik, G. F., Noersasongko, E., Nugraha, A., Andono, P. N., Jumanto, J., and Kusuma, E. J., "A systematic review of intelligence video surveillance: trends, techniques, frameworks, and datasets," IEEE Access, vol. 7, pp. 170457–170473, 2019.
5. Hampapur, A., Brown, L., Connell, J., et al., "Smart video surveillance: exploring the concept of multiscale spatiotemporal tracking," IEEE Signal Processing Magazine, vol. 22, no. 2, pp. 38–51, 2005.
6. Duan, L., Liu, J., Yang, W., Huang, T., and Gao, W., "Video coding for machines: a paradigm of collaborative compression and intelligent analytics," IEEE Transactions on Image Processing, vol. 29, pp. 8680–8695, 2020.
7. Yoon, C.-S., Jung, H.-S., Park, J.-W., Lee, H.-G., Yun, C.-H., and Lee, Y. W., "A cloud-based UTOPIA smart video surveillance system for smart cities," Applied Sciences, vol. 10, no. 18, p. 6572, 2020.
8. Rajavel, R., Ravichandran, S. K., Harimoorthy, K., Nagappan, P., and Gobichettipalayam, K. R., "IoT-based smart healthcare video surveillance system using edge computing," Journal of Ambient Intelligence and Humanized Computing, pp. 1–13, 2021.
9. Hamza, R., Hassan, A., Huang, T., Ke, L., and Yan, H., "An efficient cryptosystem for video surveillance in the internet of things environment," Complexity, vol. 2019, Article ID 1625678, 11 pages, 2019.
10. Prakash, V. R., "An enhanced coding algorithm for efficient video coding," Journal of the Institute of Electronics and Computer, vol. 1, pp. 28–38, 2019.
11. Nandhini, S. A. and Radha, S., "Efficient compressed sensing-based security approach for video surveillance application in wireless multimedia sensor networks," Computers & Electrical Engineering, vol. 60, pp. 175–192, 2017.
12. Jiang, T., Wang, H., Daneshmand, M., and Wu, D., "Cognitive radio-based smart grid traffic scheduling with binary exponential backoff," IEEE Internet of Things Journal, vol. 4, no. 6, pp. 2038–2046, 2017.
13. Jumar, R., Maaß, H., and Hagenmeyer, V., "Comparison of lossless compression schemes for high rate electrical grid time series for smart grid monitoring and analysis," Computers & Electrical Engineering, vol. 71, pp. 465–476, 2018.
14. Elhannachi, S. A., Benamrane, N., and Abdelmalik, T.-A., "Adaptive medical image compression based on lossy and lossless embedded zero tree methods," Journal of Information Processing Systems, vol. 13, pp. 40–56, 2017.
15. Sophia, P. E. and Anitha, J., "Enhanced method of using contourlet transform for medical image compression," International Journal of Advanced Intelligence Paradigms, vol. 14, no. 1/2, pp. 107–121, 2019.
16. Kalidoss, T., Rajasekaran, L., Kanagasabai, K., Sannasi, G., and Kannan, A., "QoS aware trust based routing algorithm for wireless sensor networks," Wireless Personal Communications, vol. 110, no. 4, pp. 1637–1658, 2020.
17. Yamnenko, I. and Levchenko, V., "Video-data compression using wavelet analysis," in Proceedings of the 2019 IEEE 20th International Conference on Computational Problems of Electrical Engineering (CPEE), pp. 1–4, Lviv-Slavske, Ukraine, September 2019.
18. Rahimunnisha, S. and Sudhavani, G., "Novel complexity reduction technique for multi-view video compression using HCD based genetic algorithm," Design Engineering, vol. 2021, no. 6, pp. 3219–3228, 2021.
19. Xu, L., Liu, H., Yan, X., Liao, S., and Zhang, X., "Optimization method for trajectory combination in surveillance video synopsis based on genetic algorithm," Journal of Ambient Intelligence and Humanized Computing, vol. 6, no. 5, pp. 623–633, 2015.
20. Darwish, S. M. and Almajtomi, A. A. J., "Metaheuristic-based vector quantization approach: a new paradigm for neural network-based video compression," Multimedia Tools and Applications, vol. 80, no. 5, pp. 7367–7396, 2021.
21. Abduljabbar, R. B., Hamid, O. K., and Alhyani, N. J., "Features of genetic algorithm for plain text encryption," International Journal of Electrical and Computer Engineering, vol. 11, no. 1, p. 434, 2021.
22. Azam, B., Ur Rahman, S., Irfan, M., et al., "A reliable auto-robust analysis of blood smear images for classification of microcytic hypochromic anemia using gray level matrices and gabor feature bank," Entropy, vol. 22, no. 9, p. 1040, 2020.
23. Xiang, D., Tang, T., Zhao, L., and Su, Y., "Superpixel generating algorithm based on pixel intensity and location similarity for SAR image classification," IEEE Geoscience and Remote Sensing Letters, vol. 10, no. 6, pp. 1414–1418, 2013.
24. Kwon, S., Kim, H., and Park, K. S., "Validation of heart rate extraction using video imaging on a built-in camera system of a smartphone," in Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 2174–2177, San Diego, CA, USA, August 2012.
25. Tuncer, T., Dogan, S., Ertam, F., and Subasi, A., "A novel ensemble local graph structure based feature extraction network for EEG signal analysis," Biomedical Signal Processing and Control, vol. 61, Article ID 102006, 2020.
26. Xie, H., Ren, Y., Long, W., Yang, X., and Tang, X., "Principal component analysis in projection and image domains: another form of spectral imaging in photon-counting CT," IEEE Transactions on Biomedical Engineering, vol. 68, pp. 1074–1083, 2020.
27. Liu, W., Wang, Z., Zeng, N., Yuan, Y., Alsaadi, F. E., and Liu, X., "A novel randomised particle swarm optimizer," International Journal of Machine Learning and Cybernetics, vol. 12, no. 2, pp. 529–540, 2021.
28. Yong, Z., Li-Juan, Y., Qian, Z., and Xiao-Yan, S., "Multi-objective optimization of building energy performance using a particle swarm optimizer with less control parameters," Journal of Building Engineering, vol. 32, Article ID 101505, 2020.
29. Willmott, C. and Matsuura, K., "Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance," Climate Research, vol. 30, pp. 79–82, 2005.
30. Huynh-Thu, Q. and Ghanbari, M., "The accuracy of PSNR in predicting video quality for different video scenes and frame rates," Telecommunication Systems, vol. 49, no. 1, pp. 35–48, 2012.
31. Moorthy, A. K. and Bovik, A. C., "Efficient motion weighted spatio-temporal video SSIM index," Human Vision and Electronic Imaging XV, Article ID 75271I, 2010.