Super-resolution of geosynchronous synthetic aperture radar images using dialectical GANs

Dear editor,
The concept of the geosynchronous synthetic aperture radar (GEO SAR) system was conceived to enable rapid observation of emergency disasters (e.g., landslides and earthquakes) [1]. Unlike the currently operating low Earth orbit (LEO) SARs, a GEO SAR offers the significant advantages of short revisit time and large coverage, facilitating nearly continuous observation of target regions [2]. However, the integration time of minutes to even hours required to obtain a satisfactory azimuth resolution complicates imaging. A long integration time introduces highly curved trajectories, which pose serious challenges to conventional imaging algorithms [3]. Moreover, atmospheric variation, clutter instability, and complex radio frequency interference over such long integration intervals are likely to seriously degrade image quality [3–6]. These problems can be avoided by shortening the integration time, but at the cost of low azimuth resolution and the invisibility of many fine structures in the scene, such as roads and building silhouettes.
This study aims to resolve the contradiction between azimuth resolution and integration time in GEO SAR imaging by applying an image super-resolution method based on dialectical generative adversarial networks (Di-GANs). Di-GANs are a class of GANs formulated within the framework of dialectical logic; they can translate images between different sensors, including for image super-resolution [7]. Before the introduction of deep learning methods, super-resolution methods for SAR images were model-based SAR imaging methods such as spectrum combining technology [8]. Although deep learning has produced remarkable super-resolution results for optical images, little attention has been paid to super-resolution for SAR images [7]. Considering the differences in structure and content between SAR and optical images, we demonstrate the performance of Di-GANs in achieving super-resolution for GEO SAR images.
The Di-GANs super-resolution method. Di-GANs generate super-resolution images (SRIs) by intelligently learning the texture of the input style images while preserving the information conveyed by the input content images, rather than simply superimposing the two inputs [7]. The networks produce SRIs using low-resolution images (LRIs) as the content inputs and higher-resolution reference-resolution images (RRIs) as the style inputs.
As GANs, Di-GANs comprise two networks: a generator and a discriminator. The generator is trained by minimizing the loss function

f_g = E[ ||F(G(x)) − F(x)||_F ] + E[ ||S(G(x)) − S(y)||_F ] + λ_gd E[ log(1 − D(G(x))) ],

where x represents the LRI, y represents the RRI, F(·) is the image's feature map, ||·||_F is the Frobenius norm of a matrix, S(·) is the texture definition (e.g., spatial Gram matrices), G(·) is the mapping function of the generator, f_g is the generator's loss function, E[·] is the expectation operation, λ_gd is the weight of the GANs term, and D(·) is the mapping function of the discriminator. The processing chain of the super-resolution algorithm is divided into two phases: the training phase and the operational phase. The dialectical processing of the two networks generates super-resolution output images that preserve the RRIs' textures and the original LRIs' contents. A global chart of the method and the processing dataflow are shown in the blue dashed frames in Figure 1(a).
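The loss terms defined above can be sketched numerically as follows. This is a minimal NumPy illustration, not the authors' implementation: the feature shapes, the Gram-matrix normalization, the log(1 − D) adversarial convention, and the default weight lam_gd are all illustrative assumptions.

```python
import numpy as np

def gram_matrix(feat):
    """Spatial Gram matrix S(.) of a feature map with shape (channels, H, W)."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)  # normalized channel-correlation matrix

def generator_loss(feat_gen, feat_lri, feat_rri, d_out, lam_gd=1e-3):
    """Content + texture + weighted adversarial loss for one sample.

    feat_gen : F(G(x)), features of the generated SRI
    feat_lri : F(x), features of the low-resolution content input
    feat_rri : F(y), features of the reference-resolution style input
    d_out    : D(G(x)), discriminator score in (0, 1)
    """
    content = np.linalg.norm(feat_gen - feat_lri)             # Frobenius norm ||.||_F
    texture = np.linalg.norm(gram_matrix(feat_gen) - gram_matrix(feat_rri))
    adversarial = np.log(1.0 - d_out + 1e-12)                 # generator drives D(G(x)) -> 1
    return content + texture + lam_gd * adversarial
```

The content term anchors the output to the LRI's structures, while the Gram-matrix term only constrains channel correlations, so the texture of the RRI can be imitated without copying its layout.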
Simulation and discussion. Since both ALOS PALSAR and GEO SAR operate in the L band, their images display similar scattering behaviors; we therefore selected ALOS PALSAR images as the RRIs in the simulations. The dataset generation procedure is illustrated in the leftmost portion of Figure 1(a). To simulate the GEO SAR LRIs, we utilized the classical "figure-8" GEO SAR orbit, set the perigee as the imaging orbit position, and focused the echo signal with the back-projection algorithm. A 4-MHz bandwidth was applied to obtain a moderate range resolution, while only a 10-s integration time was adopted, yielding an azimuth resolution worse than 220 m (3-dB resolution). We simulated 35 GEO SAR sub-images (about 14-km swath) derived from 3 ALOS PALSAR single-polarization (FBS) observation images of Shanghai, China. The intensities of the ALOS PALSAR images were taken as the radar cross-section data for generating the GEO SAR images. For training the Di-GANs, the sub-images and the corresponding RRIs were divided into 3500 pair-patches, 3200 of which were used as the training dataset and 300 as the test dataset. Training the network took approximately 71.7 h on a laptop with an NVidia Q2000M GPU. Two sample sub-image outputs of the Di-GANs and their respective input data are shown in Figure 1(b).
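The pair-patch preparation described above can be sketched as follows. The patch size, stride, and random seed are illustrative assumptions (the letter does not state them); only the 3500 = 3200 + 300 split comes from the text.

```python
import numpy as np

def make_pair_patches(lri, rri, patch=64, stride=64):
    """Cut co-registered LRI/RRI intensity images into aligned patch pairs."""
    assert lri.shape == rri.shape, "inputs must be co-registered"
    h, w = lri.shape
    pairs = []
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            pairs.append((lri[r:r + patch, c:c + patch],
                          rri[r:r + patch, c:c + patch]))
    return pairs

def split_pairs(pairs, n_test, seed=0):
    """Shuffle the pair-patches and hold out n_test of them for evaluation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pairs))
    test = [pairs[i] for i in idx[:n_test]]
    train = [pairs[i] for i in idx[n_test:]]
    return train, test
```

Because each LRI patch is paired with the RRI patch at the same pixel location, the networks always see content and style inputs covering the same ground scene.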
The simulation results revealed that the Di-GANs transformed the original GEO SAR input LRIs into SRIs of a quality comparable to that of the RRIs. Isolated point-like targets (benchmarks A and B), shown in Figure 1(c), were used to provide a reliable evaluation of the Di-GANs' performance. After refinement by the Di-GANs, the targets' ground-range and azimuth resolutions became comparable to those of the RRIs; in particular, the originally poor azimuth resolution improved by a salient factor of 3 to 5. With the enhanced resolution, the sample images in scene number 1 of Figure 1(b) displayed finer features, such as the ships on the sea in benchmarks A and B and the buildings near the seashore in benchmark C. Similarly, the images in scene number 2 of Figure 1(b) showed the same Di-GANs performance for an urban scenario. The GEO SAR input LRIs had low azimuth resolution, with blurry roads, rivers, and building outlines, and some features near the seashore were indiscernible. In contrast, those features become clear and recognizable in the transformed number 2 GEO SAR SRI, because the Di-GANs learn the texture of the RRIs and imitate it to refine the structures in the input LRIs. In both cases, the ability of Di-GANs to balance super-resolution processing between retrieving fine textures and preserving content was validated. This excellent performance derives essentially from the dialectical collaboration of the generator and discriminator networks and from the texture and content losses in the generator's loss function.
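The 3-dB resolution comparison on point-like targets can be reproduced with a simple width measurement on a target's intensity profile. This is a generic sketch, not the authors' evaluation code; it uses the half-power (−3 dB) crossing of the profile's main lobe.

```python
import numpy as np

def resolution_3db(profile, spacing):
    """Width (in meters) of a point target's profile at -3 dB below the peak.

    profile : 1-D intensity cut through the target (linear power units)
    spacing : pixel spacing in meters along the cut
    """
    half_power = profile.max() / 2.0            # -3 dB is approximately half power
    above = np.where(profile >= half_power)[0]  # samples inside the main lobe
    return (above[-1] - above[0] + 1) * spacing
```

Applied to azimuth and ground-range cuts through an isolated target before and after Di-GANs processing, this gives the kind of resolution figures reported in Figure 1(c). (The sidelobes of a well-focused response stay below half power, so the simple thresholding only measures the main lobe.)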
In general, the simulation results verify the effectiveness of Di-GANs for obtaining super-resolution GEO SAR images. The technique can effectively reduce the dependence of GEO SAR imaging on long integration times and facilitate a satisfactory azimuth resolution with a short integration time, thereby mitigating the impacts of atmospheric disturbance, clutter decorrelation, and curved trajectories in GEO SAR.

Figure 1 (Color online) (a) Di-GANs' global frame and dataflow; (b) two sample sub-images processed by Di-GANs; (c) resolution analysis of point-like targets (AR = azimuth resolution and GR = ground range resolution).