AUTOMATIC MODULATION CLASSIFICATION USING DEEP LEARNING POLAR FEATURE

Abstract: The automatic classification of signal modulation is of great importance in modern communications, especially in cognitive radio. Several methods have been used in this field, the most important of which classify modulation automatically using deep learning. Methods based on the convolutional neural network (CNN), one of the deep learning architectures, have achieved high classification accuracy, so the proposed network is a CNN consisting of four blocks, each containing a set of symmetric and asymmetric filters, together with max-pooling layers. In this paper, the in-phase/quadrature (I/Q) features and the polar features have been combined at the input, which extends the input, that is, increases the features available inside the network. It also improves the classification accuracy of higher-order modulations through the polar plane. The RadioML 2018.01A dataset, used in the most recent research, was adopted, from which 11 normal-class modulation types were taken (FM, GMSK, QPSK, BPSK, OQPSK, AM-SSB-SC, 4ASK, AM-DSB-SC, 16QAM, 8PSK, OOK). The simulation was carried out in Matlab 2021. The proposed network achieves 100% classification accuracy for the 11 modulation types when the signal-to-noise ratio (SNR) is greater than or equal to 2 dB. The results were compared with modern networks: the Baseline (BL) network, the Visual Geometry Group (VGG) network, and the Residual Network (RN). The comparison shows the superiority of the proposed network over these networks: at SNR = 2 dB the proposed network achieves 100% accuracy, while BL achieves 72% and RN and VGG reach almost 93%.


Introduction
Due to its importance in a variety of communication systems, including many military and civilian applications such as cognitive radio and spectrum sharing, automatic modulation recognition (AMR) has recently received a great deal of attention from academia and industry [1]. Modulation classification is an old problem and has been studied for a long time; however, recent advances in machine learning, particularly deep learning approaches, are critical for improving classification performance [2]. To detect a signal, it is necessary to know its modulation type, which makes AMR very important [3]. AMR depends either on likelihood-based (LB) methods or on feature-based (FB) methods. LB techniques evaluate the probability function of the signal under every candidate modulation scheme and choose the scheme with the highest probability value [4]. Although LB techniques can theoretically deliver the optimal answer, they are weak against model mismatch and suffer from high processing complexity. These disadvantages make LB-AMR difficult to employ in low-cost and real-time applications; LB methods are also degraded by phase and frequency offsets, timing errors, and non-Gaussian noise [7]. In contrast, FB methods are seen as a suitable substitute for LB methods, since they can provide good results with far less computational complexity. FB methods rely on certain statistical characteristics of the signal samples and are not based on the principle of probability [8]: a feature is first extracted from the input signal, and the signal can then be classified.
Higher-order cumulants are one method of feature extraction [9], the wavelet transform is another [10], and cyclostationary characteristics [11] are also commonly used in the feature extraction procedure. In general, although FB approaches are not optimal in a Bayesian sense, they are widely used because they are easy to implement [12]. They must, however, manually extract expert features from a large number of examples, which results in significant computational complexity [13]. In recent years, deep learning has played an important role in the field of computer vision [14]. Some researchers have also had a significant impact on classification studies in natural language processing [15] and in the allocation of network resources [16], [17].

Related Works
CNN networks are one of the more advanced types of deep learning (DL): they have many layers and large filters. Image processing applications are shown in [18] and [19], while computer vision applications are shown in [20] and [21]. AlexNet [22] is one of the advanced, ready-made CNN networks.
GoogleNet [23] is also considered a modern network of the same type. O'Shea [5] built on these architectures and created the VGG [24] and ResNet [25] networks for modulation classification in 2018. To demonstrate performance in terms of classification accuracy, the proposed CNN is compared with the ResNet (RN) and VGG networks. The Baseline (BL) network was also created by O'Shea. BL extracts higher-order-moment features (M21, M40, M41, M42, M43, M60, M61, and M62) and classifies them with a machine learning model developed by T. Chen [26], an ensemble of gradient-boosted trees (XGBoost). In the second network, VGG, the input features are the 2×1024 I/Q samples, and a DL (CNN) classifier stacks convolution layers whose feature maps shrink at each stage until the 1×11 softmax layer is reached. In the third network, RN, the input features are also the 2×1024 I/Q samples, and the classifier is a DL architecture called a "deep residual network".
The overall problem statement is formulated as follows: accurate feature extraction and classification lead to accurate recognition of the modulation type. The objectives of the paper, which also constitute its contributions, are:

1- Extending the input to the network enhances classification accuracy by allowing an increase in the input's characteristics.

2- Designing a dense CNN-type network helps to increase the classification accuracy.

3- Through the above two points, the highest modulation classification accuracy can be obtained compared with recent papers.

Dataset
Normal-classes dataset: The RadioML 2018.01A dataset is used in this work, with the classes (FM, GMSK, OQPSK, BPSK, QPSK, AM-SSB-SC, 4ASK, AM-DSB-SC, 16QAM, 8PSK, OOK). This data was created by O'Shea [5] and is an updated version of the tools created in [27]. It is one of the most widely used datasets in recent papers in the field of automatic signal classification, and it includes realistic channel effects: carrier frequency offset (CFO), symbol rate offset (SRO), multipath fading, and thermal noise. The dataset comprises about 2.5 million modulation signal frames of length 1024 samples, with SNRs between -20 dB and 30 dB, where 80% are used for training and 20% for testing.
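The 80%/20% split described above can be sketched as follows. This is an illustrative NumPy sketch, not the paper's MATLAB implementation; the function name `split_frames` and the random stand-in data are assumptions for demonstration only.

```python
import numpy as np

def split_frames(frames, labels, train_frac=0.8, seed=0):
    """Shuffle and split modulation frames into train/test sets (80/20)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(frames))
    n_train = int(train_frac * len(frames))
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    return (frames[train_idx], labels[train_idx],
            frames[test_idx], labels[test_idx])

# Stand-in data shaped like the dataset frames: N frames of 2x1024 I/Q samples,
# with one of 11 normal-class labels per frame.
X = np.random.randn(100, 2, 1024)
y = np.random.randint(0, 11, size=100)
Xtr, ytr, Xte, yte = split_frames(X, y)
```

A fixed seed makes the split reproducible across runs, which matters when comparing classifiers on the same partition.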

Channel Model:
The CNN model is a basic architecture commonly used in classification tasks. It is usually composed primarily of multiple convolutional layers (Conv) and max-pooling layers connected to one or more fully connected layers (FC) [6]. For any coordinate (a, b), the convolution of a two-dimensional filter with the input matrix is given as in [6]:

z(a, b) = Σ_i x_i · y_i(a, b) + k

where x_i are the weights in the filter, y_i(a, b) are the input values in the corresponding spatial range, and k is the standard bias used to obtain the output of the convolutional layer [6].
The result of this equation passes through a nonlinear activation function g in ConvNets to obtain the full feature map, as in [6]:

f(a, b) = g(z(a, b))

There are many activation functions, but the most famous is the rectified linear unit (ReLU), which applies the following operation to each element z, as in [6]:

g(z) = max(0, z)

The input size of the ReLU layer is usually unchanged. A max- or average-pooling layer then reduces the input size of any layer placed after it. This is why it is possible to increase the number of layers and form a network of great depth.
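The three operations above (convolution with bias, ReLU, and pooling) can be sketched in NumPy. This is a minimal illustrative implementation of the generic formulas, not the paper's network; the function names are assumptions.

```python
import numpy as np

def relu(z):
    """Element-wise rectified linear unit: g(z) = max(0, z)."""
    return np.maximum(0.0, z)

def conv2d_valid(x, w, bias=0.0):
    """Naive 2-D 'valid' convolution: z(a, b) = sum_i x_i * y_i(a, b) + k."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for a in range(out.shape[0]):
        for b in range(out.shape[1]):
            # Weighted sum of the filter with the input window, plus bias.
            out[a, b] = np.sum(w * x[a:a + kh, b:b + kw]) + bias
    return out

def max_pool(x, ph, pw):
    """Non-overlapping max pooling: each (ph x pw) window becomes one value."""
    H, W = x.shape
    return (x[:H - H % ph, :W - W % pw]
            .reshape(H // ph, ph, W // pw, pw)
            .max(axis=(1, 3)))
```

For example, pooling a 4×4 map with a 2×2 window yields a 2×2 map, which is how pooling layers let the depth of the network grow without the feature maps growing too.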

CNN Model Proposed
Extending the input to the proposed network helps to increase the extracted features, as shown in Fig. 1. The new matrix contains four rows (I, Q, r, Ɵ), and the number of columns equals the signal length n = 1024, so the matrix becomes 4×1024 after the frame has been extended. The 4×1024 input matrix thus increases the input features in order to increase the classification accuracy. Following the A-block, the B-block contains asymmetric (3×1) filters whose task is to extract vertical characteristics in the spatial dimension in a superior manner, reducing the trainable parameters by almost half compared with (2×2) or (3×3) kernels. The literature [5] is cited in relation to the impact on accuracy performance.
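The input extension described above amounts to appending the polar representation (magnitude r and phase Ɵ) to each I/Q frame. A minimal NumPy sketch, assuming a `(2, n)` array with rows I and Q (the function name is hypothetical):

```python
import numpy as np

def extend_iq_with_polar(iq):
    """Extend a (2 x n) I/Q frame to (4 x n) by appending polar features.

    Rows of the result: (I, Q, r, theta).
    """
    i, q = iq[0], iq[1]
    r = np.sqrt(i**2 + q**2)      # magnitude of each complex sample
    theta = np.arctan2(q, i)      # phase in radians, in (-pi, pi]
    return np.vstack([i, q, r, theta])

frame = np.random.randn(2, 1024)
extended = extend_iq_with_polar(frame)   # shape (4, 1024)
```

Since r and Ɵ are deterministic functions of I and Q, the extension adds no new information in the strict sense, but it presents the same signal in a coordinate system where higher-order constellations separate more easily.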

INPUT FEATURES
The feature size must be reduced from 4×256 to 2×256 before entering Block C, which is done by an average-pooling layer (A-Pooling) of size (2×1) with stride (2, 1). Because average pooling reduces the feature map from 4×256 to 2×256 while retaining the key information of the input, it plays an important role in lowering computational complexity without harming classification accuracy. Within each block, the convolutional unit (Conv) consists of three layers in sequence: a convolution layer, a batch-normalization layer, and a ReLU activation layer; this unit is used for the convolutional operations throughout the proposed CNN. So that the output is as small as possible, a global average-pooling layer reduces the feature map generated by C-Block2 to (1×128). The result then passes through a fully connected layer (FC), and the classification probability is calculated using a softmax layer (SM). Table 1 provides a brief overview of the proposed CNN architecture, and Table 2 shows the simulation parameters.
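The two pooling steps above can be sketched as follows: a (2×1) average pooling with stride (2, 1) halves the rows of a feature map while keeping the columns, and global average pooling collapses each feature map to a single scalar. This is an illustrative NumPy sketch with hypothetical function names, not the paper's MATLAB code.

```python
import numpy as np

def avg_pool_2x1(x):
    """(2 x 1) average pooling with stride (2, 1):
    halves the number of rows, keeps the number of columns."""
    H, W = x.shape
    return x.reshape(H // 2, 2, W).mean(axis=1)

def global_avg_pool(feature_maps):
    """Global average pooling: one scalar per feature map.

    feature_maps: array of shape (channels, H, W) -> result of shape (channels,).
    """
    return feature_maps.mean(axis=(1, 2))

x = np.random.randn(4, 256)
pooled = avg_pool_2x1(x)   # shape (2, 256), as required before Block C
```

With 128 feature maps out of C-Block2, global average pooling yields the 1×128 vector that feeds the FC and softmax layers.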

Results and Discussion
The proposed method classifies 11 modulation types of the RadioML 2018.01A dataset. It demonstrates the benefit of expanding the I/Q input with r and Ɵ, combined with a dense CNN network that contains many filters; the simulation was performed in Matlab 2021b. To demonstrate the superiority of the input method described in this paper and the robustness of the proposed CNN network, the results are compared in terms of accuracy with the latest research on the same dataset, namely O'Shea's [5] BL, VGG, and RN networks. As shown in Fig. 3, at SNR = 2 dB the proposed CNN almost reaches an accuracy of 100%, while RN and VGG almost reach 93% and BL almost reaches 72%. This shows the importance of the proposed network. The proposed CNN, VGG, and RN networks perform similarly at low SNRs, whereas the proposed CNN performs significantly better at high SNRs, starting to outperform the other networks at approximately 2 dB. The CNN scales effectively up to the 512-1024 input size, but bigger input windows may require additional scaling strategies because of memory, training time, and dataset constraints.

Classification Accuracy for Each Type of Modulation
The classification accuracy for each modulation type is shown in Fig. 5, where the signal length is L = 1024. The most accurately classified types are FM (green) and AM-DSB-SC (yellow), whose classification accuracy reaches 100% when the signal-to-noise ratio is greater than -5 dB, while 8PSK (marked with triangles), QPSK (red with stars), and 16QAM (red with triangles) require an SNR greater than 4 dB.
A confusion matrix based on the data given above is shown in Fig. 6 for visual analysis at 0 dB SNR. According to the confusion matrix, the majority of the modulations are recognized extremely well. However, because noise brings the constellation points closer together, errors are more likely to occur, and OQPSK, 8PSK, 16QAM, and QPSK are known to be harder to recognize, which results in reduced accuracy.
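A confusion matrix like the one analyzed above can be built directly from true and predicted class labels. A minimal sketch (the helper name and the toy labels are assumptions, not the paper's data):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true modulation class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy example with 3 classes.
y_true = [0, 0, 1, 2, 2, 2]
y_pred = [0, 1, 1, 2, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)

# Per-class accuracy is the diagonal divided by each row sum.
per_class_acc = cm.diagonal() / cm.sum(axis=1)
```

Off-diagonal mass between, say, the QPSK and OQPSK rows and columns is exactly the "points closer together" effect described above: the classifier trades those two labels for each other at low SNR.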

Conclusion:
In this paper, the proposed CNN depends on extending the input from 2×1024 to 4×1024 by merging the real and imaginary (I/Q) patterns with the polar patterns, which increases the extracted features and helps improve the classification of high-order modulations. The network is divided into a number of blocks (A, B, C1, and C2) that contribute to increasing the classification accuracy, which reaches 100% at SNR ≥ 2 dB. The proposed CNN is compared with three different networks (BL, VGG, and RN) on the same dataset with the normal-class modulation types (FM, 4ASK, OOK, 8PSK, QPSK, 16QAM, AM-SSB-SC, BPSK, AM-DSB-SC, OQPSK, and GMSK) and achieves better results. The proposed CNN and RN networks perform similarly at low SNRs, but at SNR = 2 dB the proposed network achieves 100% classification accuracy, whereas the BL network achieves 72% and the VGG and RN networks achieve 92%. This demonstrates the advantages of the proposed network. From the above, it can be concluded that with the I/Q input alone the results are not good enough, but combining I/Q with r/Ɵ increases the extracted features and improves the classification accuracy of high-order modulations; increasing the layers of the CNN also improves network performance, at the cost of an increase in classification time. Future work will include classifying the 24 modulation types of the same dataset (difficult classes of RadioML 2018.01A).