# SAR Image Despeckling by Deep Neural Networks: from a pre-trained model to an end-to-end training strategy
## Emanuele Dalsasso, Xiangli Yang, Loïc Denis, Florence Tupin, Wen Yang
## Abstract
_Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Many different schemes have been proposed for the restoration of intensity SAR images. Among the possible approaches, methods based on convolutional neural networks (CNNs) have recently been shown to reach state-of-the-art performance for SAR image restoration. CNN training requires good training data: many pairs of speckle-free / speckle-corrupted images. This is an issue in SAR applications, given the inherent scarcity of speckle-free images. To handle this problem, this paper analyzes different strategies one can adopt, depending on the speckle removal task one wishes to perform and on the availability of multitemporal stacks of SAR data. The first strategy applies a CNN model, trained to remove additive white Gaussian noise from natural images, within a recently proposed SAR speckle removal framework: MuLoG (MUlti-channel LOgarithm with Gaussian denoising). No training on SAR images is performed; the network is readily applied to speckle reduction tasks. The second strategy considers a novel approach to construct a reliable dataset of speckle-free SAR images necessary to train a CNN model. Finally, a hybrid approach is also analyzed: the CNN used to remove additive white Gaussian noise is trained on speckle-free SAR images. The proposed methods are compared to other state-of-the-art speckle removal filters to evaluate the quality of denoising and to discuss the pros and cons of the different strategies. Along with the paper, we make available the weights of the trained network to allow its usage by other researchers._
![summary_SAR-CNN](./img/proposedCNN.png)
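The homomorphic idea behind the first strategy can be sketched as follows: a log transform turns multiplicative speckle into approximately additive noise, which an off-the-shelf Gaussian denoiser can then remove. The snippet below is a simplified numpy sketch, not the actual MuLoG algorithm (which additionally handles debiasing and the multi-channel case rigorously); `box_blur` is only a toy stand-in for the CNN denoiser.

```
import numpy as np

def homomorphic_despeckle(intensity, gaussian_denoiser):
    # log transform: multiplicative speckle becomes approximately additive noise
    log_img = np.log(np.maximum(intensity, 1e-10))
    # any additive-Gaussian-noise denoiser can be plugged in here
    denoised = gaussian_denoiser(log_img)
    # back to the intensity domain (MuLoG also applies a debiasing
    # step, omitted in this toy sketch)
    return np.exp(denoised)

def box_blur(x, r=1):
    # crude stand-in for the CNN denoiser: local averaging
    p = np.pad(x, r, mode='edge')
    out = np.zeros_like(x)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy: r + dy + x.shape[0], r + dx: r + dx + x.shape[1]]
    return out / (2 * r + 1) ** 2

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)
speckled = clean * rng.gamma(shape=1.0, scale=1.0, size=clean.shape)  # 1-look speckle
restored = homomorphic_despeckle(speckled, box_blur)
```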
## Resources
- [Paper (ArXiv)](https://arxiv.org/abs/2006.15559)
The material is made available under the **GNU General Public License v3.0**: Copyright 2020, Emanuele Dalsasso, Loïc Denis, Florence Tupin, of LTCI research lab - Télécom ParisTech, an Institut Mines Télécom school.
All rights reserved.
To cite the article:

```
@article{dalsasso2020sar,
  title={SAR Image Despeckling by Deep Neural Networks: from a pre-trained model to an end-to-end training strategy},
  author={Emanuele Dalsasso and Xiangli Yang and Loïc Denis and Florence Tupin and Wen Yang},
  journal={arXiv preprint arXiv:2006.15559},
  year={2020}
}
```
%% Cell type:markdown id: tags:
<a href="https://colab.research.google.com/github/emanueledalsasso/SAR-CNN/blob/master/SAR_CNN_test.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
%% Cell type:markdown id: tags:
# SAR Image Despeckling by Deep Neural Networks: from a pre-trained model to an end-to-end training strategy
## Emanuele Dalsasso, Xiangli Yang, Loïc Denis, Florence Tupin, Wen Yang
The code is made available under the **GNU General Public License v3.0**: Copyright 2020, Emanuele Dalsasso, Loïc Denis, Florence Tupin, of LTCI research lab - Télécom ParisTech, an Institut Mines Télécom school.
All rights reserved.
Please note that the training set is composed only of **Sentinel-1** SAR images, so this testing code is specific to that sensor.
%% Cell type:markdown id: tags:
## Resources
- [Paper (ArXiv)](https://arxiv.org/abs/2006.15559)
To cite the article:

```
@article{dalsasso2020sar,
  title={SAR Image Despeckling by Deep Neural Networks: from a pre-trained model to an end-to-end training strategy},
  author={Emanuele Dalsasso and Xiangli Yang and Loïc Denis and Florence Tupin and Wen Yang},
  journal={arXiv preprint arXiv:2006.15559},
  year={2020}
}
```
%% Cell type:markdown id: tags:
## 0. Enable GPU and save copy on Drive to enable editing
- Runtime -> Change runtime type -> Hardware accelerator: GPU
- File -> Save a copy in Drive
%% Cell type:markdown id: tags:
## 1. Download network weights and code
%% Cell type:code id: tags:
```
from google_drive_downloader import GoogleDriveDownloader as gdd
gdd.download_file_from_google_drive(file_id='1CgoG3f02uFzpA5PGcwKek9bitp_5T64q',
dest_path='./SAR-CNN-test.zip',
unzip=True)
```
%% Cell type:markdown id: tags:
## 2. Install compatible version of tensorflow
%% Cell type:code id: tags:
```
!pip install tensorflow-gpu==1.13.1
```
%% Cell type:markdown id: tags:
## A. Test on synthetic data
%% Cell type:code id: tags:
```
!python /content/SAR-CNN-test/main.py --test_dir=/content/test_synthetic
```
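For reference, synthetic test images are obtained by corrupting clean images with simulated speckle. Below is a minimal numpy sketch of the standard model (L-look intensity speckle following a gamma distribution with unit mean, multiplying the clean intensity); it is an assumption that the test script's data were generated this way, and the function name `add_speckle` is ours for illustration.

```
import numpy as np

def add_speckle(clean_intensity, looks=1, rng=None):
    # L-look intensity speckle: gamma-distributed with shape L,
    # scale 1/L, so the speckle has unit mean
    rng = rng or np.random.default_rng(0)
    s = rng.gamma(shape=looks, scale=1.0 / looks, size=clean_intensity.shape)
    return clean_intensity * s

rng = np.random.default_rng(42)
clean = np.full((128, 128), 50.0)
one_look = add_speckle(clean, looks=1, rng=rng)   # strongest speckle
four_look = add_speckle(clean, looks=4, rng=rng)  # multilooking reduces variance
```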
%% Cell type:markdown id: tags:
## B. Test on real data
Two real Sentinel-1 images can be found in the folder _/content/SAR-CNN-test/test_data/real_.
To test on custom data, upload your single-channel Sentinel-1 images as numpy arrays with shape [ydim, xdim].
Results are stored in _/content/test_.
Clean the _/content/test_ directory before each run, otherwise previous results will be overwritten.
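A minimal sketch of preparing custom data for testing, assuming a hypothetical file name `my_image.npy` and a random array in place of a real Sentinel-1 amplitude crop (on Colab you would save the file into the test-data folder read by the script):

```
import numpy as np

# Hypothetical example: a single-channel image as a [ydim, xdim]
# float array (here random data stands in for a Sentinel-1 crop)
amplitude = np.abs(np.random.default_rng(1).normal(size=(256, 256))).astype(np.float32)

# Save in the .npy format expected by the test script
np.save('my_image.npy', amplitude)
```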
%% Cell type:code id: tags:
```
!python /content/SAR-CNN-test/main.py --real_sar=1 --test_dir=/content/test_real
```