Sensors (Basel). 2019 Feb 19;19(4):871. doi: 10.3390/s19040871.

Parallel Connected Generative Adversarial Network with Quadratic Operation for SAR Image Generation and Application for Classification


Chu He et al. Sensors (Basel). 2019.

Abstract

Thanks to the availability of large-scale data, deep Convolutional Neural Networks (CNNs) have achieved success in many computer vision applications. However, the performance of CNNs on Synthetic Aperture Radar (SAR) image classification remains unsatisfactory, owing to the scarcity of well-labeled SAR data and to the differences in imaging mechanisms between SAR and optical images. This paper therefore addresses SAR image classification by employing the Generative Adversarial Network (GAN) to produce additional labeled SAR data. We propose specialized GANs for generating SAR images to be used in the training process. First, we incorporate a quadratic operation into the GAN, extending the convolution so that the discriminator represents SAR data better; second, the statistical characteristics of SAR images are integrated into the GAN to make its value function more reasonable; finally, two types of parallel connected GANs are designed: PWGAN, which combines the Deep Convolutional GAN (DCGAN) and the Wasserstein GAN with Gradient Penalty (WGAN-GP) in a single structure, and CNN-PGAN, which applies a pre-trained CNN as the discriminator of the parallel GAN. Both PWGAN and CNN-PGAN contain a number of discriminators and generators determined by the number of target categories. Experimental results on a TerraSAR-X single-polarization dataset demonstrate the effectiveness of the proposed method.
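The abstract describes two architectural ideas: a quadratic operation that extends the discriminator's convolutions, and a parallel design with one generator-discriminator pair per target category. The following PyTorch sketch is an illustrative reading of both ideas only; the specific quadratic form, the class and layer names (QuadraticConv2d, PerClassGAN), the layer sizes, and the latent dimension are assumptions, and the DCGAN/WGAN-GP losses and SAR statistics term are not shown here.

import torch
import torch.nn as nn

class QuadraticConv2d(nn.Module):
    """Convolution extended with a second-order term (assumed form:
    linear response plus the elementwise product of two auxiliary
    convolution responses; the paper's exact formulation may differ)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.linear = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.quad_a = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.quad_b = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)

    def forward(self, x):
        return self.linear(x) + self.quad_a(x) * self.quad_b(x)


class PerClassGAN(nn.Module):
    """One generator-discriminator pair; the parallel design instantiates
    one such pair per target category (hypothetical structure)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        # Generator: maps a latent vector (N, latent_dim, 1, 1) to a 16x16 image chip.
        self.generator = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),
        )
        # Discriminator built on the quadratic convolution block.
        self.discriminator = nn.Sequential(
            QuadraticConv2d(1, 64), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

# One GAN per target category, each trained on that category's SAR samples;
# the category count (7) follows DataSet1 described in the figures.
num_categories = 7
parallel_gans = nn.ModuleList(PerClassGAN() for _ in range(num_categories))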

Keywords: Generative Adversarial Network (GAN); Synthetic Aperture Radar (SAR); image classification; quadratic operation.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1. Assembled CNN architecture.
Figure 2. Schematic of a GAN.
Figure 3. Quadratic operation.
Figure 4. Two different architectures of our proposed GAN.
Figure 5. DataSet1 (7 categories): single-polarized (VV) data acquired by TerraSAR-X over Guangdong, China (intensity image).
Figure 6. DataSet2 (6 categories, of which only 5 are considered).
Figure 7. Overview of our framework.
Figure 8. Accuracy for different amounts of generated training data on DataSet1 and DataSet2. The blue and yellow bars indicate the classification accuracies of PGAN and CNN-PGAN, respectively. The first column is the classification result on the 128 real training images; the second, third, fourth, and fifth columns are the results of training data augmentation using 32, 64, 128, and 256 generated images, respectively.
Figure 9. Classification results for different numbers of real images used to train the entire network on DataSet1.
Figure 10. Classification results for different amounts of augmented training data on DataSet1. The first column is the classification result on the 128 real training images; the second, third, fourth, and fifth columns are the results of training data augmentation using 32, 64, 128, and 256 images produced by the simple augmentation strategy, respectively.
Figure 11. Accuracy for different training datasets. The blue and yellow bars indicate the classification accuracies of PWGAN and CNN-PGAN, respectively. (a) The first column is the classification result on the training dataset (128 real images and 64 augmented images); the second, third, fourth, and fifth columns are the results of training data augmentation using 32, 64, 128, and 256 generated images, respectively. (b) The first column is the classification result on the training dataset (128 real images and 64 augmented images); the second, third, fourth, and fifth columns are the results of training data augmentation using 32, 64, 128, and 256 augmented images, respectively. The test set is always the same.
