**A video tutorial with instructions for downloading sample data from the [Airbus webpage](https://www.intelligence-airbusds.com/imagery/sample-imagery/) and putting it in the input format required by the framework is available at https://youtu.be/RCXm07jSUEA. Results on two images are provided in the *Additional_results* folder.**

# As if by magic: self-supervised training of deep despeckling networks with MERLIN
## Emanuele Dalsasso, Loïc Denis, Florence Tupin
## Abstract
_Speckle fluctuations seriously limit the interpretability of synthetic aperture radar (SAR) images. Speckle reduction has thus been the subject of numerous works spanning at least four decades. Techniques based on deep neural networks have recently achieved a new level of performance in terms of SAR image restoration quality. Beyond the design of suitable network architectures or the selection of adequate loss functions, the construction of training sets is of the utmost importance. So far, most approaches have considered a supervised training strategy: the networks are trained to produce outputs as close as possible to speckle-free reference images. Speckle-free images are generally not available, which requires resorting to natural or optical images, or to the selection of stable areas in long time series, to circumvent the lack of ground truth. Self-supervision, on the other hand, avoids the use of speckle-free images. We introduce a self-supervised strategy based on the separation of the real and imaginary parts of single-look complex SAR images, called MERLIN (coMplex sElf-supeRvised despeckLINg), and show that it offers a straightforward way to train all kinds of deep despeckling networks. Networks trained with MERLIN take into account the spatial correlations due to the SAR transfer function specific to a given sensor and imaging mode. By requiring only a single image, and possibly exploiting large archives, MERLIN opens the door to hassle-free as well as large-scale training of despeckling networks. The code of the trained models is made freely available at https://gitlab.telecom-paris.fr/RING/MERLIN._

![summary_MERLIN](./img/MERLIN_framework.png)


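To give an intuition for the self-supervision principle described in the abstract: under the fully developed speckle model, the real and imaginary parts of a single-look complex image are independent, zero-mean Gaussian with variance equal to half the reflectivity. A network can therefore be fed one part and trained by the negative log-likelihood of the held-out part. The sketch below is a minimal NumPy illustration of such a loss, not the framework's actual code; the function name `merlin_style_loss` and the demo values are our own.

```python
import numpy as np

def merlin_style_loss(pred_reflectivity, held_out_part):
    """Sketch of a MERLIN-style self-supervised loss.

    The network sees only one part (e.g. the real part) of a single-look
    complex SAR image and predicts the reflectivity R. Under fully
    developed speckle, the held-out part (e.g. the imaginary part) is
    zero-mean Gaussian with variance R/2, so we minimise its negative
    log-likelihood (constant terms dropped): mean(log(R)/2 + b**2 / R).
    """
    eps = 1e-8  # numerical stability for low-reflectivity pixels
    r = np.maximum(pred_reflectivity, eps)
    return float(np.mean(0.5 * np.log(r) + held_out_part ** 2 / r))

# Demo on simulated speckle: the loss is lower at the true reflectivity
# than at a wrong one, so minimising it recovers R without clean targets.
rng = np.random.default_rng(0)
true_R = 4.0
b = rng.normal(0.0, np.sqrt(true_R / 2), size=(256, 256))  # held-out part
loss_true = merlin_style_loss(np.full_like(b, true_R), b)
loss_wrong = merlin_style_loss(np.full_like(b, 20.0), b)
print(loss_true < loss_wrong)  # the true reflectivity scores better
```

In expectation this loss is minimised exactly at the true reflectivity, which is why no speckle-free reference image is needed during training.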
Two documents with additional results on TerraSAR-X images in Stripmap mode and in High Resolution SpotLight (HS) mode
are provided in the *Additional_results* folder.

## Resources

- [Paper (ArXiv)](https://arxiv.org/abs/2110.13148)

To cite the article:

```
E. Dalsasso, L. Denis and F. Tupin,
"As if by magic: self-supervised training of deep despeckling networks with MERLIN",
arXiv preprint arXiv:2110.13148, 2021.
```

## Licence

The material is made available under the **GNU General Public License v3.0**: Copyright 2020, Emanuele Dalsasso, Loïc Denis, Florence Tupin, of LTCI research lab - Télécom Paris, an Institut Mines Télécom school.
All rights reserved.